
VMworld 2014 Roundup: Day 3

Cloud, infrastructure, technical, & solution architect from Alberta, Canada. Been working with VMware products since ESX 2, and hold several industry certifications. 9x VMware vExpert.

I ran into some technical difficulties at VMworld last week and wasn't able to continue blogging from the conference. In the spirit of "better late than never", it's time to catch up! Cast your mind, if you will, to one week ago today...

Tuesday was a bit of a second wind. The initial jet lag has dissipated, my feet have begrudgingly accepted their fate for the week, and the tasty sessions have begun.

General Session

Some standouts from this session for me included the example hospital scenario with "Dr." Kit Colbert. I liked the notion of medical staff going from their Mac, to their iPad, to a shared device in the patient's room, authenticated via badge (smart card). Strip away the medical aspect and this example applies, by and large, to most companies large and small. The ability to begin work on a main machine (typically a laptop of some sort) and then transition easily to a media tablet fosters a boost in productivity for sure. Interesting, though not necessarily a universal use case, the hospital example had 3D-accelerated VDI running on the shared machine in the patient's room. It was noted that this device could even be a thin client of some sort. Again, interesting, but I don't think there's an obvious use case for the average business here.

CloudVolumes sounds interesting as well, providing a centrally managed means of defining and deploying applications to systems. Conceptually, at a high level at least, this sounds a bit like an "app store" type solution, only aimed at IT support staff rather than the end-user. The demo went through the deployment of 21 applications simultaneously to a desktop in seconds. Potentially impressive; however, I'd like to see under the hood a bit first to get an understanding of what's going on.

Project Fargo, besides having the coolest project name of the conference, is aiming to optimize the speed and size of VMs with a focus on squeezing as much out of deployment as possible. The claim made was that a new VM can be deployed and powered on in under a second. This sounds like it'll need some horsepower on the back-end, especially storage, so it's another potentially interesting solution, pending some deep dive understanding.

EVO:RAIL is the biggest of the VMware buzz this year: a reference tech spec and architecture that partners can use to build certified appliances. Very interesting, and very nice to see. Many will claim that this solution is too late to the game, and that it provides massive validation to hyper-converged infrastructure. Both of those claims are correct; however, VMware does have an opportunity to play "catch up" with some of their co-opetition, and they have indeed validated hyper-converged as a viable and recommended approach. More on EVO:RAIL further below. RAIL's big brother EVO:RACK was also introduced, now in technology preview stage. What EVO:RAIL is for SMB, EVO:RACK is intended to be for enterprise. It was suggested during the general session that EVO:RAIL is representative of virtual infrastructure while EVO:RACK is representative of cloud infrastructure. It'll be worth keeping an eye on this new family of products.

Some vSphere 6 beta tidbits were also shared, including an FT evolution bringing support for VMs with up to 4 vCPUs, as well as confirmation that vCenter-to-vCenter vMotion and long-distance vMotion are in the works. NSX was pegged as opportune for helping a VM's network properties "travel with" the VM when it migrates using one of these options, while also providing new and different opportunities for handling workloads.

Another technology validation was that of containerization. VMware went so far as to almost explicitly credit Docker with revitalizing containers. It was made very clear on stage that VMware does not believe that containers will replace VMs and that the thought was "fundamentally wrong". Strong words, however it does seem like running containers on top of VMs rather than "replacing" VMs should allow for the highest amount of flexibility. An interesting mention is that VMware has been "doing" containers for over 3 years now with Cloud Foundry.

Last, but not least, micro-segmentation, which is achieved via controlled communication paths in NSX, looks to be a potential way to avoid firewall rule hell, or at least a way to defer some of the pain to another solution. As someone currently undergoing a network security transformation in my day-to-day work, this piqued my interest, but like most of the solutions presented this week, we'll just have to wait and see.

SDDC1337: EVO:RAIL Technical Deep Dive

I was fortunate to be able to get a seat in this class with its clever session number. Duncan Epping (@DuncanYB) and Dave Shanley (@daveshanley) treated the room to a provisioning walk-through, showing off how polished and straightforward the EVO:RAIL interface is. It really had the vibe of a well-designed consumer router UI experience and in my opinion seems targeted at those businesses that are upper-end SMB. They can cut a reasonable P.O. to acquire quality gear; however, they may be running a no-person or one-person IT department and can benefit from the extreme ease of use that EVO:RAIL provides. If there's a need to do some work "under the hood", as it were, the regular vSphere Web Client interface is still available.

VMware has set some hardware specifications for equipment bearing the EVO:RAIL moniker. Each hyper-converged infrastructure appliance (HCIA, or "the box") is a 2U/4N form factor, meaning there are four nodes within the space of 2U. This gives you four ESXi hosts per HCIA. EVO:RAIL "auto-scales" to a maximum of four HCIAs, giving you a maximum of 16 ESXi nodes. Note that in addition to the HCIA(s) you'll also need a top-of-rack switch to offer connectivity both to the rest of your network and between HCIAs. Each HCIA node should ship with dual 6-core Intel processors, 192 GB of RAM, and three 1.2 TB drives (for a total of about 13.1 TB usable space per HCIA). It's also important to note that to take advantage of VSAN with EVO:RAIL your network needs to support L2 multicast (L2MC), plus the new EVO:RAIL engine requires IPv6 for such things as autodiscovery and mDNS.
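As a quick sanity check of the capacity figures above, the raw numbers line up reasonably with the quoted 13.1 TB usable; attributing the gap to formatting/filesystem overhead (before any VSAN replication) is my own interpretation, not something stated in the session.

```python
# Back-of-the-envelope check of the EVO:RAIL capacity figures quoted above.
NODES_PER_HCIA = 4    # 2U/4N appliance
DRIVES_PER_NODE = 3   # three 1.2 TB drives per node
DRIVE_TB = 1.2

raw_tb = NODES_PER_HCIA * DRIVES_PER_NODE * DRIVE_TB  # 14.4 TB raw per HCIA
usable_tb = 13.1                                      # figure quoted in the session
overhead_pct = (raw_tb - usable_tb) / raw_tb * 100    # gap between raw and usable

print(f"Raw capacity per HCIA: {raw_tb:.1f} TB")
print(f"Quoted usable: {usable_tb} TB (~{overhead_pct:.0f}% overhead, before VSAN replication)")
```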

From the software end of things, EVO:RAIL includes vSphere 5.5 Enterprise Plus, VSAN, Log Insight, the EVO:RAIL engine mentioned earlier, plus maintenance and support. The EVO:RAIL interface is pure HTML5 (no Flash!), and that includes the VM console. Updating of EVO:RAIL and its software is done via a new update mechanism where the appropriate files (presumably in the form of "patches" or "updates") are uploaded to the HCIA through the EVO:RAIL interface. This suggests that admins will have to download the requisite files to their local systems or network drives and then follow a manual process to "kick off" the updates. Once uploaded, the update process itself is automated as well as non-disruptive, since each node of the HCIA is automatically evacuated, put into maintenance mode, and updated sequentially.

From initial implementation to first VM is clocked at as little as 15 minutes, which is pretty impressive. Assisting this process is a factory customization experience where default values are identified and applied at time of purchase. Once the EVO:RAIL HCIA is received, if there is no need to differ from the defaults, then there is a "Just Go" option that automates the full deployment of the HCIA. Otherwise, customizations can be made by going through the DVR-like implementation process. Alternatively, a customized config file, in JSON format, can be uploaded at the time of implementation to apply customized settings all at once.
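The actual schema of that JSON config file wasn't shown in the session, so the structure and every key name below are purely illustrative assumptions; this is just a sketch of what generating such a file programmatically could look like.

```python
import json

# Hypothetical EVO:RAIL implementation config -- the real schema was not
# shown, so every key below is an illustrative assumption only.
config = {
    "hostnamePrefix": "esxi-host",
    "managementNetwork": {
        "vlan": 10,
        "ipRangeStart": "192.168.10.101",
        "ipRangeEnd": "192.168.10.116",
        "netmask": "255.255.255.0",
        "gateway": "192.168.10.1",
    },
    "vmotionVlan": 20,
    "vsanVlan": 30,
    "ntpServers": ["pool.ntp.org"],
}

# Write the file that would then be uploaded once through the EVO:RAIL UI.
with open("evorail-config.json", "w") as f:
    json.dump(config, f, indent=2)
```

The appeal of a file-based approach over the wizard is repeatability: the same settings can be version-controlled and reapplied across multiple HCIAs.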

The VM provisioning workflow, as well as having a really slick and intuitive UI, introduces some new features. There are guest sizes defined out of the box for the administrator to choose from when deploying a new VM. They're labelled Small, Medium and Large, and the specs for each vary depending on the VM type (based on guest OS). There's also a step for the admin to choose a security policy level, which is based on the Risk Profiles within the vSphere Security Hardening Guide. We were told that William Lam (@lamw) is to thank for getting that feature included. Both of these would be welcome additions to the vSphere Web Client, especially the security policy option. Anything to help foster the adoption of more specific and considered security stances is a plus, in my book.
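Conceptually, those per-OS T-shirt sizes amount to a lookup table. The actual EVO:RAIL values weren't published in the session, so the numbers below (vCPU, RAM in GB, disk in GB) are illustrative assumptions only:

```python
# Sketch of per-OS T-shirt sizing presets; values are illustrative
# assumptions, not the actual EVO:RAIL figures.
PRESETS = {
    "linux": {
        "small": (1, 2, 20),
        "medium": (2, 4, 40),
        "large": (4, 8, 80),
    },
    "windows": {
        "small": (1, 4, 40),
        "medium": (2, 8, 60),
        "large": (4, 16, 100),
    },
}

def vm_spec(guest_os: str, size: str) -> dict:
    """Return a VM spec for a given guest OS family and T-shirt size."""
    vcpu, ram_gb, disk_gb = PRESETS[guest_os][size]
    return {"vcpu": vcpu, "ram_gb": ram_gb, "disk_gb": disk_gb}

print(vm_spec("windows", "medium"))  # {'vcpu': 2, 'ram_gb': 8, 'disk_gb': 60}
```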

All in all, the EVO:RAIL offering is quite promising, and it should be interesting to see how well VMware's hardware partners deliver against initial expectations. I'm reservedly hopeful that some of the more tight-margin-focused partners might be able to deliver an EVO:RAIL HCIA that becomes palatable as a (higher end) lab-in-a-box.

SDDC1600 - Art of Infrastructure Design

This session continued for me the theme of infrastructure design and the path to VMware Certified Design Expert certification. This was a panel discussion with some audience questions towards the end. Similar to the VCDX Bootcamp on the previous Saturday, an overview was provided of the VCDX program. There was a discussion of various points to watch out for and peer and study groups were emphasized as excellent approaches to improve your chances of success.

This session was very well done and there was some variety in message, however it overlapped quite well with the VCDX Bootcamp, as I mentioned, so I'll keep the summary here brief.

VMworld Hands-on Labs

I spent a few hours in the Hands-on Labs Tuesday afternoon. After taking the EVO:RAIL deep dive session I needed to give the UI a spin for myself, and so fired up the EVO:RAIL lab (available exclusively at VMworld for the time being). Of course, with this being a lab, the EVO:RAIL engine, while the actual engine itself, was not backed by HCIA hardware and was likely running nested within the HOL infrastructure.

I'm happy to report that the deployment of EVO:RAIL performed just as smoothly and efficiently in the lab as it did during the demo earlier in the day. The software components are installed and configured from scratch as the implementation proceeds (no pre-existing templates or "almost complete" installations, according to those on the EVO:RAIL team) and I was able to go from zero to having two VMs deployed in about 10-12 minutes.

I also took the vCloud Air for vSphere Administrators lab for a tour. I suppose I was expecting that it would be more current than it was, given the vCloud Air announcement this week; however, it was simply the vCHS lab that had been available before, prefaced by an "anywhere you see vCHS, replace that with vCloud Air" notice (paraphrased, of course). It's a good lab, especially if you're not yet familiar with vCHS/vCloud Air, however I had gone through an earlier iteration of this lab before VMworld.

A couple of interesting observations during my HOL time. The EVO:RAIL lab, being one of the newer labs, was run on a Windows 8 VM which was running Classic Shell to deliver a Windows 7-style start menu. I thought that was an inspired choice and, not coincidentally, it's how I prefer my Windows 8 experience these days. I also have to call out the CloudCred experience at the HOL this year. The QR code and "redemption" code solution worked well, with a smartphone or tablet easily picking up the QR code and letting CloudCred players handily score points for labs taken. Well done HOL and CloudCred teams!

STO1965 - VVOLs Technical Deep Dive

My last session of the day was presented by Suzy Visvanathan and Rawlinson Rivera (@PunchingClouds). The session started off with a high level overview of VVOLs. Let's see if I got my head wrapped around this properly.

Today's storage system acts as both access point and actual storage for data. The high-level concept behind VVOLs is taking the access point away from the storage provider and replacing it with software within vSphere. Within vSphere there will be a set of Protocol Endpoints (PEs). These PEs will be protocol "agnostic" in that they'll support all storage protocols; there are SCSI PEs and NFS PEs. The Protocol Endpoints are discovered in various ways (SCSI PEs on rescan, NFS PEs maintained as IP or file paths upon configuration) and then stored within a database by ESXi.

By using PEs to abstract the storage access point, we can now do some interesting things with data storage. Rather than be constrained by the storage solution's logical constructs, such as the LUN, vSphere can define its own "units" of data and have the storage solution save them. In practice this would mean that a VM would break down into a "unit" comprising the VM config files (essentially the VM's definition) as well as a "unit" per virtual hard disk. These are then stored as individual items on the storage solution. In effect, we're now dealing with object-based storage, where each of the "units", as I've called them, is an individual object tracked by vSphere and stored by the storage solution. The role of the storage solution is now that of Storage Container (SC).
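The decomposition described above can be sketched in a few lines. This is purely a conceptual model of my own; the class and field names are not VMware API names:

```python
from dataclasses import dataclass, field

# Conceptual model of VVOL decomposition: one object for the VM's config
# plus one object per virtual disk, all placed into a Storage Container.
@dataclass
class VVol:
    kind: str   # "config" or "data"
    name: str

@dataclass
class StorageContainer:
    name: str
    objects: list = field(default_factory=list)

def decompose_vm(vm_name: str, disk_count: int, container: StorageContainer):
    """Break a VM into individual VVOL objects and store each one."""
    container.objects.append(VVol("config", f"{vm_name}.vmx"))
    for i in range(disk_count):
        container.objects.append(VVol("data", f"{vm_name}_disk{i}.vmdk"))

sc = StorageContainer("gold-tier")
decompose_vm("app01", 2, sc)
print([v.name for v in sc.objects])
# ['app01.vmx', 'app01_disk0.vmdk', 'app01_disk1.vmdk']
```

The point of the model is that the array now sees three independently addressable objects for this VM, rather than one opaque LUN shared with many other VMs.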

Initially this allows for some simple improvements. For example, today, if you wanted to use storage/SAN-based snapshots on a block-based storage solution against a VM, you would have to snapshot an entire LUN. With object-based storage, only the VM's particular objects would need snapshots. This approach, which is closer in concept to how vSphere-level snapshots work today, means that all snapshots could conceivably be offloaded to the storage array for a performance boost.

It also draws the lines of responsibility slightly differently for infrastructure admins. Storage admins will define and deploy containers with the desired capabilities (think storage tiers). Virtualization admins will provision VMs which use the Storage Container appropriate to their needs.

Note that datastores won't be extinct just yet. They'll stick around for some time and will continue to be required to make sure that non-VVOL-aware applications and components can still have their storage needs met. Datastores will effectively be the same as their requisite Storage Container at this point. Maybe in the future pure SCs will take over from datastores, but today they're too ubiquitous to overhaul and/or replace completely.

As I mentioned, we will see some initial efficiency and optimization during VVOL adoption. This will be especially true as new and interesting ways of leveraging Storage Policy-Based Management (SPBM) are applied to VVOLs, with policies applying at a per-VVOL rather than per-LUN level. I think what's really exciting about VVOLs are the potential applications that haven't been discovered or tapped into yet. This is potentially paradigm-shifting, given enough adoption and focus.

Quick Wrapup

So that's (last) Tuesday in a nutshell. There were some after-conference events attended, but they don't need detailing here. I'll continue posting Roundups this week to catch up from where we were last week. See you then!
