VMworld 2014 Roundup: Day 3
VMworld Hands-on Labs
I spent a few hours in the Hands-on Labs Tuesday afternoon. After taking the EVO:RAIL deep dive session I needed to give the UI a spin for myself, so I fired up the EVO:RAIL lab (available exclusively at VMworld for the time being). Of course, this being a lab, the EVO:RAIL engine (the real software, not a mockup) was not backed by HCIA hardware and was likely running nested within the HOL infrastructure.
I’m happy to report that the deployment of EVO:RAIL performed just as smoothly and efficiently in the lab as it did during the demo earlier in the day. The software components are installed and configured from scratch as the implementation proceeds (no pre-existing templates or “almost complete” installations, according to those on the EVO:RAIL team) and I was able to go from zero to having two VMs deployed in about 10-12 minutes.
I also took the vCloud Air for vSphere Administrators lab for a tour. I suppose I was expecting it to be more current than it was, given the vCloud Air announcement this week; however, it was simply the vCHS lab that had been available before, prefaced by an "anywhere you see vCHS, replace that with vCloud Air" notice (paraphrased, of course). It's a good lab, especially if you're not yet familiar with vCHS/vCloud Air, but I had gone through an earlier iteration of it before VMworld.
A couple of interesting observations during my HOL time. The EVO:RAIL lab, being one of the newer labs, was run on a Windows 8 VM running Classic Shell to deliver a Windows 7-style start menu. I thought that was an inspired choice; not coincidentally, that's how I prefer my Windows 8 experience these days. I also have to call out the CloudCred experience at the HOL this year. The QR code and "redemption" code solution worked well, with a smartphone or tablet easily picking up the QR code and letting CloudCred players handily score points for labs taken. Well done, HOL and CloudCred teams!
STO1965 – VVOLs Technical Deep Dive
My last session of the day was presented by Suzy Visvanathan and Rawlinson Rivera (@PunchingClouds). The session started off with a high level overview of VVOLs. Let’s see if I got my head wrapped around this properly.
Today's storage systems act as both the access point and the actual storage for data. The high-level concept behind VVOLs is to take the access point away from the storage provider and replace it with software within vSphere. Within vSphere there will be a set of Protocol Endpoints (PEs). These PEs are protocol "agnostic" in that they'll support all storage protocols; there are SCSI PEs and NFS PEs. The Protocol Endpoints are discovered in various ways (SCSI PEs on rescan, NFS PEs maintained as an IP or file path set at configuration time) and then stored within a database by ESXi.
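To make the PE idea concrete, here's a minimal sketch of that discovery-and-tracking flow. This is purely illustrative, not VMware code: all class and method names are hypothetical, and the "database" is just an in-memory list standing in for whatever ESXi actually uses.

```python
# Hypothetical model of how ESXi might track discovered Protocol
# Endpoints (PEs) of both flavors in a local database.
from dataclasses import dataclass, field


@dataclass
class ProtocolEndpoint:
    protocol: str   # "SCSI" or "NFS"
    address: str    # SCSI device identifier, or NFS "ip:/path"


@dataclass
class PEDatabase:
    endpoints: list = field(default_factory=list)

    def register_scsi(self, device_id: str) -> None:
        # SCSI PEs are discovered during an adapter rescan
        self.endpoints.append(ProtocolEndpoint("SCSI", device_id))

    def register_nfs(self, ip: str, path: str) -> None:
        # NFS PEs are maintained as an IP/file-path pair set at config time
        self.endpoints.append(ProtocolEndpoint("NFS", f"{ip}:{path}"))


db = PEDatabase()
db.register_scsi("naa.600508b1001c5e45")      # found on rescan
db.register_nfs("10.0.0.5", "/exports/vvol")  # set at configuration
```

The point is simply that, either way a PE is found, vSphere ends up with a uniform record of it; everything above the database doesn't care which protocol is underneath.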
By using PEs to abstract the storage access point, we can now do some interesting things with data storage. Rather than being constrained by the storage solution's logical constructs, such as the LUN, vSphere can define its own "units" of data and have the storage solution save them. In practice this means that a VM would break down into a "unit" comprising the VM config files (essentially the VM's definition) as well as a "unit" per virtual hard disk. These are then stored as individual items on the storage solution. In effect, we're now dealing with object-based storage, where each of the "units", as I've called them, is an individual object tracked by vSphere and stored by the storage solution. The role of the storage solution is now that of Storage Container (SC).
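That decomposition can be sketched in a few lines. Again, the names here (`StorageContainer`, `provision_vm`) are my own invention for illustration; the idea is just that one VM becomes several independently stored objects.

```python
# Hypothetical sketch: a VM decomposes into one "config" object (its
# definition) plus one object per virtual disk, each stored as an
# individual item in a Storage Container.
from dataclasses import dataclass, field


@dataclass
class StorageContainer:
    name: str
    objects: dict = field(default_factory=dict)  # object id -> kind

    def store(self, object_id: str, kind: str) -> None:
        self.objects[object_id] = kind


def provision_vm(vm_name: str, disk_count: int, sc: StorageContainer) -> list:
    """Break a VM into per-item objects and store each one individually."""
    ids = [f"{vm_name}-config"]  # the VM's definition files
    ids += [f"{vm_name}-disk{i}" for i in range(disk_count)]
    for oid in ids:
        sc.store(oid, "config" if oid.endswith("config") else "disk")
    return ids


sc = StorageContainer("gold-tier")
objs = provision_vm("web01", 2, sc)
# web01 is now three separate objects: its config plus two disks
```

Because the storage solution sees each object individually, it can act on them individually, which is what makes the snapshot example below possible.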
Initially this allows for some simple improvements. For example, today if you wanted to use storage/SAN-based snapshots against a VM on a block-based storage solution, you would have to snapshot an entire LUN. With object-based storage, only that VM's particular objects would need to be snapshotted. This approach, which is closer in concept to how vSphere-level snapshots work today, means that all snapshots could conceivably be offloaded to the storage array for a performance boost.
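The difference in snapshot scope is easy to show side by side. This is an illustrative comparison with made-up function names, not any array's actual API:

```python
# LUN-level vs. object-level snapshot scope, modeled as plain dicts.
# lun_contents maps each VM on a LUN to its list of storage objects.

def snapshot_lun(lun_contents: dict) -> list:
    # Array snapshot of a LUN: every object of every VM is swept up,
    # wanted or not.
    return [obj for objs in lun_contents.values() for obj in objs]


def snapshot_vm_objects(lun_contents: dict, vm: str) -> list:
    # Object-based snapshot: only the target VM's objects are included.
    return list(lun_contents[vm])


contents = {
    "web01": ["web01-config", "web01-disk0"],
    "db01": ["db01-config", "db01-disk0", "db01-disk1"],
}
whole_lun = snapshot_lun(contents)               # 5 objects, both VMs
just_web01 = snapshot_vm_objects(contents, "web01")  # 2 objects, one VM
```

The per-object version captures exactly the VM you care about, which is why it maps so naturally onto offloading vSphere-style snapshots to the array.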
It also draws the lines of responsibility slightly differently for infrastructure admins. Storage admins will define and deploy containers with the desired capabilities (think storage tiers). Virtualization admins will provision VMs which use the Storage Container appropriate to their needs.
Note that datastores won't be extinct just yet. They'll stick around for some time and will continue to be required to make sure that non-VVOL-aware applications and components can still have their storage needs met. Datastores will effectively be the same as their corresponding Storage Container at this point. Maybe in the future pure SCs will take over from datastores, but today they're too ubiquitous to overhaul and/or replace completely.
As I mentioned, we will see some initial efficiency and optimization gains during VVOL adoption. This will be especially true as new and interesting ways of leveraging Storage Policy-Based Management (SPBM) are applied to VVOLs, since policies apply at a per-VVol rather than per-LUN level. I think what's really exciting about VVOLs is the potential applications that haven't been discovered or tapped into yet. This is potentially paradigm-shifting given enough adoption and focus.
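Per-VVol policy is the part worth lingering on: two disks of the same VM can carry different storage policies, something a per-LUN model can't express. A minimal sketch, with hypothetical policy names and fields of my choosing:

```python
# Hypothetical per-object policy assignment under SPBM-style management:
# each VVol carries its own policy, instead of inheriting one setting
# from whichever LUN/datastore it happens to live on.
from dataclasses import dataclass


@dataclass(frozen=True)
class StoragePolicy:
    name: str
    replicated: bool
    iops_limit: int


policies = {
    "gold": StoragePolicy("gold", replicated=True, iops_limit=10000),
    "bronze": StoragePolicy("bronze", replicated=False, iops_limit=1000),
}

# The OS disk and the scratch disk of the same VM get different policies.
vvol_policy = {
    "web01-disk0": policies["gold"],    # OS disk: replicated, high IOPS
    "web01-disk1": policies["bronze"],  # scratch disk: best effort
}
```

Under a per-LUN model, both disks would have been forced onto whatever tier their shared LUN provided; here the granularity follows the object.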
So that's (last) Tuesday in a nutshell. I attended some after-conference events as well, but they don't need detailing here. I'll continue posting Roundups this week to catch up from where we were last week. See you then!