T.B.D. | There Be Dragons | https://teebeedee.org

VMworld 2017 Roundup: Day 5
Fri, 01 Sep 2017 | https://teebeedee.org/2017/08/vmworld-2017-roundup-day-5/

The last (half) day of VMworld 2017 US. As always, it’s bitter-sweet. There’s no coverage of the General Session this morning as I was touring the Switch data centre here in Las Vegas instead.

Tour of the Switch Data Center, Las Vegas

This morning I was invited by a collection of companies to take a tour of the Switch data center here in Las Vegas. I joined a group of about 20 other invitees, who all boarded a shuttle from the Mandalay Bay convention center that would take us to the tour. As soon as we approached the complex it was clear that Switch takes its business seriously. The shuttle passed through a gated entrance to a loading area next to one of the Switch buildings, which was surrounded by fencing about 10-12 feet high with a discouraging spiky top. Switch-branded security staff (all of whom are highly skilled and trained former military folks) were on hand to guide us as soon as we stepped out of the shuttle.

The procedure for gaining entrance to the facility was fairly standard based on my previous data center experiences: DC staff hold on to your ID and give you a visitor's badge, and you are then badged in through a formidable-looking full-height turnstile. The entrance area and the hallways beyond were very nicely designed from an aesthetic standpoint: coloured lighting, clean, considered flooring, and clever designs made from the same powder-coated cable trays and metal piping that's used in areas of the DC itself.

The group was led to an event room which consisted of tiered, home-theatre-style seating (plush reclining chairs linked into rows), with very modern A/V gear up front including a large projection screen. The various companies involved in arranging the tour, Evolve IP, HPE, Veeam and Ingram Micro, each gave us a brief overview of themselves before relinquishing the floor to a Switch rep, who provided information about their operation and fielded questions.

Switch's Las Vegas facility was claimed to be one of the most advanced data centres in the country. It's the west coast home for Switch (their east coast home is in Philadelphia). By square footage, it's the largest data centre in the world, at about 2.2 million square feet, including building 11, which is not quite open yet (our tour was in building 7). Each building on the campus is about 300 thousand square feet (several football fields' worth), and each is capable of dealing with 100 megawatts of power.

Due to innovations pioneered by Switch and designed by owner Rob Roy, who holds 350 claims and patents, Switch is able to reach higher density in its DCs than other providers. In fact, they can squeeze 38-42 kW into each rack, and some clients are working on achieving 60+ kW per rack. This is achievable chiefly thanks to Switch's unique circulation and cooling system.

The air handlers, which are key to circulation and cooling, have been designed in a modular fashion so that they can be produced and integrated into the facility on demand. Each air handler also has a weather station built in, so that it can programmatically decide which of six available modes it will run in based on current environmental conditions.

The data centres are built with catastrophe in mind. For example, the walls are ballistics-resistant to protect against possible projectiles from an explosion of some kind, from war, or perhaps even a meteor strike. Thanks to this engineering approach, the Switch facilities have not contributed to a single second of customer impact over the 17 years they've been operating.

It was clear throughout the tour that a lot of thought and consideration has gone into the design and production of these facilities, and that the same mindset applies to how they're operated day to day. It was an impressive facility, and an impressive tour. Costs for leasing space in the facilities weren't discussed, but it's safe to say that if you can afford it, you can't go wrong.

VMworld 2017 Roundup: Day 4
Thu, 31 Aug 2017 | https://teebeedee.org/2017/08/vmworld-2017-roundup-day-4/

VMworld “hump day” is in the bag. This is the day that sits in the middle of VMworld and is distinguished by the lack of a General Session in the morning and by the Customer Appreciation event in the evening. Let’s recap.

vSphere 6.5 Host Resources Deep Dive: Part 2 [SER1872BU]

Last year was part one of Frank Denneman and Niels Hagoort‘s vSphere 6.5 Host Resources Deep Dive. This year they focused on CPU-related concerns.

First up, Frank focused on non-uniform memory access, or NUMA for short. He pointed out that the best user experience is obtained by providing consistent performance, and one of the ways of achieving that is by reducing “moving parts” (complexity). Also, don’t forget when troubleshooting performance in a virtual environment that you shouldn’t focus only on the affected VM(s); other VMs and components that share resources and/or hosts could be contributing, or affected themselves.

There’s an advanced setting, Numa.LocalityWeightActionAffinity, that you can consider if it turns out that CPU assignment isn’t balanced well for your particular workload across NUMA boundaries. The default is 130 (but 130 of what, Frank hasn’t found out yet). Set the value of this setting to zero to prevent this NUMA load balancing. It’s a per-host setting, so it would have to be set on every host in the cluster.

Note that, as an advanced setting, this shouldn’t be adjusted unless you’ve determined that you definitely have a performance concern that would be improved by changing it. The same is true for any advanced setting: moving away from the default should be done deliberately, and with good reason, otherwise those wouldn’t be the default values!
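
If, after testing, you do decide to change this across a cluster, PowerCLI is one way to apply it consistently. Here’s a minimal sketch, assuming an existing PowerCLI connection to vCenter; the cluster name “Prod” is just a placeholder:

# Set Numa.LocalityWeightActionAffinity to 0 on every host in a cluster.
# Assumes Connect-VIServer has already been run; 'Prod' is a placeholder name.
Get-Cluster -Name 'Prod' | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name 'Numa.LocalityWeightActionAffinity' |
        Set-AdvancedSetting -Value 0 -Confirm:$false
}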

Frank pointed out that when you read PCPU in VMware’s documentation, it refers to either a physical processor core or a hyper-thread. There’s no guarantee of one or the other out of the box, as ESXi decides how to schedule CPU time. If a vCPU is scheduled to run on a physical core, it’s considered to be charged for 100% of its CPU time. If a vCPU is scheduled to run on a hyper-threading “core”, it’s considered to be charged 62.5% of its CPU time, meaning that it’s immediately put back into the scheduling queue to satisfy the outstanding 37.5%. This is to make sure that a vCPU scheduled on HT ultimately gets the same CPU time as a vCPU scheduled on a core; however, being scheduled multiple times does impact performance.

The NUMA scheduler will balance workloads across NUMA nodes for VMs whose vCPU count is less than the NUMA boundary. For large VMs (from a vCPU point of view), try to mirror the physical NUMA domains for the absolute best performance. The NUMA scheduler will use the CPU core count for client sizing as well. There is a per-VM advanced setting, numa.vcpu.preferHT, that, when set to TRUE, will make the NUMA scheduler consider hyper-threads in addition to cores. This can keep a VM within a NUMA boundary but, like all advanced settings, you should have a good reason to change this behaviour for a particular VM. It may be good for VMs that benefit from maintaining memory locality, but not for VMs with high CPU load. See VMware KB 2003582 for more info.
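
As a point of reference, a per-VM advanced setting like this can be added with PowerCLI. A minimal sketch, assuming a connected session; “bigvm01” is just a placeholder name, and the VM should be powered off when you apply it:

# Add numa.vcpu.preferHT to one VM's advanced configuration ('bigvm01' is hypothetical).
$vm = Get-VM -Name 'bigvm01'
New-AdvancedSetting -Entity $vm -Name 'numa.vcpu.preferHT' -Value 'TRUE' -Confirm:$false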

The NUMA client always considers CPU cores, and not memory locality. The advanced setting numa.consolidate, when set to False, will help distribute based on memory. Did we mention being careful about advanced settings? Pretty sure we did.

Frank covered the impact of CPU sizing on storage performance, and how selection of storage for particular use cases (such as vSAN caching tier versus capacity tier) needs to be considered along with your CPU to ensure best performance. Hint: Intel Optane with 3D XPoint makes for an excellent vSAN caching tier but is wasted on the capacity tier.

Niels then discussed the impact of CPU utilization on network latency. Traditionally NICs rely on interrupts to have packets picked up by the CPU and put into memory. If the CPU is heavily utilized then interrupt handling can be delayed, which in turn introduces latency on the local NIC. A newer technology, vRDMA, aims to address this by allowing NICs to directly access memory, bypassing the ESXi kernel entirely. This isn’t broadly available yet as it requires newer hardware (or at least updated firmware) and, naturally, vendor support.

Oh, and, if you haven’t already, don’t forget to pick up the book!

VMworld 2017 Roundup: Day 3
Thu, 31 Aug 2017 | https://teebeedee.org/2017/08/vmworld-2017-roundup-day-3/

The second full day of VMworld was one of the fullest. Onward!

General Session

For general session commentary, check out the Live Blog. The super-condensed version:

  • Pivotal and VMware announce Pivotal Container Service (PKS), a joint offering that was developed in partnership with Google Cloud (the ‘K’ alludes to Kubernetes),
  • VMware showed off the new PKS as well as AppDefense and Wavefront, via a fictitious customer story, and
  • VMware Workspace ONE is now available for trial.

NetApp HCI, Simplicity AND Enterprise Scale [STO3308BUS]

Gabe Chapman started off the session by giving an overview of hyper-converged infrastructure (HCI). In theory, HCI is software-defined compute, network and storage in a single package. In reality there have been (and in some cases still are) challenges with that approach. First-generation HCI solutions had to compromise in the areas of performance, flexibility and consolidation to deliver a “single-box” product. This was often rooted in the limited ways in which nodes were engineered to make the various infrastructure components available. If you needed more storage, you were forced to consume more compute, whether you needed it or not, and vice-versa if you just needed more compute and not storage. Depending on your workloads, the addition of more compute can increase your per-CPU licensing requirements (ex. Oracle, vSphere, Microsoft SQL), which means that extra storage you needed just got much more expensive.

NetApp’s HCI offering is built on top of SolidFire Element OS, which takes care of software-defined storage, and VMware vSphere, which handles software-defined compute (i.e. virtualization). These are tied together by the NetApp Deployment Engine, which reduces the steps and effort involved in deploying the solution. If a customer wants to leverage NetApp HCI to deliver file services, they can deploy a virtualized ONTAP instance (NetApp ONTAP Select vNAS) that will make those services available, while the customer leverages the benefits of block storage on the back end.

NetApp HCI will be delivered as a four (4) node chassis. You need at least two compute nodes and two storage nodes, however you’re free to add nodes as necessary after that. Nodes are available in “t-shirt” sizes, meaning there are three sizes of compute node and three sizes of storage node to choose from.

Within NetApp HCI, the client can specify performance guarantees per application by specifying minimum, maximum and burst IOPS. This allows you to create walled gardens around your apps, so that you aren’t as affected by “noisy neighbour” syndrome. If VVols are deployed then these guarantees can be specified per VM, otherwise they’d be applied per VMDK. Thinking about the practicality of this, there’s likely going to need to be some sort of show-back or charge-back to business units that ask for high IOPS guarantees; otherwise all the business services will ask for high guarantees.
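
For those curious what this looks like under the covers, the SolidFire Element API exposes these values per volume. The snippet below is only a rough, hypothetical sketch using PowerShell’s Invoke-RestMethod; the endpoint version, volume ID and IOPS figures are placeholders, so check the method name and parameters against NetApp’s Element API documentation before relying on it:

# Hypothetical example: set min/max/burst IOPS QoS on an Element OS volume.
$body = @{
    method = 'ModifyVolume'
    params = @{
        volumeID = 42                                                  # placeholder ID
        qos      = @{ minIOPS = 1000; maxIOPS = 5000; burstIOPS = 8000 }
    }
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Uri 'https://mvip.example.com/json-rpc/9.0' -Method Post `
    -Body $body -ContentType 'application/json' -Credential (Get-Credential)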

Gabriel cautions about watching out for what he calls the “HCI tax”. He’s seen some HCI offerings deploy controller VMs that use anywhere between 16-128 GB of RAM and 4-8 CPUs each. These controller VMs take away from the resources available to workload VMs within each HCI node. Also watch out for HCI solutions that don’t support deploying vCenter directly on the HCI solution, meaning you would need to find more infrastructure to house vCenter.

Back to the NetApp Deployment Engine (NDE): it is capable of deploying Element OS, ESXi, vCenter with the relevant HCI plugins, and a single monitoring VM to the NetApp HCI environment. NetApp HCI has administrators manage their daily operations through the familiar vCenter interface, but it also provides a REST-based API. The NetApp Data Fabric that is also part of NetApp HCI means the solution can be integrated with AWS and Azure. The NetApp pedigree means that familiar NetApp features, like SnapMirror, can be leveraged to migrate storage/VMs between HCI and other NetApp-capable targets.

NetApp HCI general availability is targeted for October 27th of this year.

VMworld 2017 Roundup: Day 2
Wed, 30 Aug 2017 | https://teebeedee.org/2017/08/vmworld-2017-roundup-day-2/

VMworld’s first full day of sessions and such delivered. Here we go.

General Session

For general session commentary, check out the Live Blog. To sum it up quick:

  • VMware Cloud on AWS was officially released,
  • VMware’s betting big on NSX by pushing it beyond on site and cloud data centres, extending it to containers, app platforms, and more, and
  • VMware introduced AppDefense, which is their new security play.

Meet the vCenter Server Experts Panel [SER1440PU]

The first session of the week for me was a Q&A panel, featuring Mike Foley, Adam Eckerle, Kyle Ruddy, Dilpreet Bindra, and Emad Younis. They started things off by mentioning that the vSphere Web Client is going away, with a sunsetting announcement delivered last week. vCenter Server on Windows is also going to follow the same path, with the VCSA standing as the successor. There hasn’t been an official date set yet, however it’s expected that these products will be retired as of the next major release of vSphere. If you’re still on vCenter Server for Windows, be sure to look at the migration tool as it will help you seamlessly replace it with the VCSA.

There was a bit of panel discussion among the panelists themselves, highlighting some vSphere 6.5 benefits (such as VM encryption and secure boot), the ongoing transition from SOAP-based to RESTful APIs, and the fact that PowerCLI modules are being evaluated and re-written to make them better and faster (case in point, Get-VM). It’s recommended to run the latest PowerCLI version rather than sticking with the version that matches your vSphere deployment, as these kinds of improvements are often “retroactively” available to older vSphere environments, since it’s the PowerCLI code itself being improved. Also, check out the VMware API Explorer to browse, search and inspect the various VMware APIs.
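
If you’re still on one of the older installer-based PowerCLI releases, the current modules come from the PowerShell Gallery. A quick sketch, run from a PowerShell prompt (VMware.PowerCLI is the gallery module name):

# Install the PowerCLI modules from the PowerShell Gallery for the current user...
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
# ...or bring an existing module-based installation up to date.
Update-Module -Name VMware.PowerCLI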

The audience was then invited to pose questions to the panel:

  • Q: Will UUIDs be accessible via APIs?
  • A: That’s coming in the future; VMware is interested to know what use cases customers have for the UUID attribute so they can better understand the need and deliver.
  • Q: Will vCenter be able to be upgraded via VUM?
  • A: That functionality’s being worked on.
  • Q: The VAMI can run backups manually but they can’t be scheduled at the moment; is that coming?
  • A: Scheduling can be handled externally (ex. via PowerShell) at the moment, but it is being worked on.
  • Q: Is VMware getting rid of the PSC? Certificates are especially difficult to manage.
  • A: The number one use case for an external PSC deployment is the need for vCenter Enhanced Linked Mode. VMware is working on making Enhanced Linked Mode available with embedded PSCs. In the meantime, if customers don’t need Enhanced Linked Mode they should stick with the embedded PSC.
  • Q: Is high availability (HA) being brought to PSC?
  • A: For external PSC the official guidance is to use an external load balancer. If you use the embedded PSC it can be covered by vCenter HA.
  • Q: Can vCenter warn if the PSC isn’t at the right version before vCenter itself is updated?
  • A: This exists in 6.5 already.
  • Q: Can SSO domains be merged (for those customers who have made mistakes, for instance)?
  • A: It’s being worked on for 6.x. If you’re still on 5.x then vCenter can consolidate SSO domains before the upgrade to 6.x. Once a vCenter component is upgraded to 6.x, however, the SSO domains are “locked in”.
  • Q: Why is FT limited to 4 CPUs? Can it be increased?
  • A: The limit is based on engineering considerations, however increasing the FT maximums is being worked on. No dates can be provided at this time.
  • Q: When will the new vSphere Client be feature complete?
  • A: It’s now about 95% feature complete with the 6.5u1 release, and VMware is working on reaching 100% feature parity with the vSphere Web Client.
  • Q: Are there any plans to increase the maximum cluster nodes for DRS (currently at 64)?
  • A: It’s in the works.
  • Q: Some Auto Deploy features are still PowerCLI only, will they be added to the vSphere Client?
  • A: It’s on the road map.
  • Q: Is certificate installation & management being improved? Not all companies can make vCenter an intermediate CA.
  • A: That limitation is understood. A hybrid approach is recommended where the certs are replaced on the PSC and VCSA, and then those components are allowed to manage the certificates installed on the hosts. This secures access to vCenter itself, which would be needed to attempt to get to the host interfaces anyway.

The panel ended with a quick note that vCenter HA is not the same as vCenter DR, so plan for both.

VMworld 2017 Live Blog: Day 3
Tue, 29 Aug 2017 | https://teebeedee.org/2017/08/vmworld-2017-live-blog-day-3/

Welcome to the VMworld 2017 US Tuesday General Session live blog!

Tuesday General Session

Note: To view the live blog you need to read this post on https://teebeedee.org. If you read it some other way, for instance via email subscription or RSS feed, it may appear to be empty.

VMworld 2017 Live Blog: Day 2
Mon, 28 Aug 2017 | https://teebeedee.org/2017/08/vmworld-2017-live-blog-day-2/

Welcome to the VMworld 2017 US Monday General Session live blog!

Monday General Session

Note: To view the live blog you need to read this post on https://teebeedee.org. If you read it some other way, for instance via email subscription or RSS feed, it may appear to be empty.

VMworld 2017 Roundup: Day 1
Mon, 28 Aug 2017 | https://teebeedee.org/2017/08/vmworld-2017-roundup-day-1/

Sunday’s typically the day many attendees show up for VMworld. There are a few sessions, TAM stuff, partner stuff, but it’s fairly light overall. Registration went very smoothly this year, with the badge and materials pick-ups handled separately. It wasn’t without flaws, of course, as a gentleman in line beside me tried to get his badge, only to be told that he was in the wrong printer line-up and had to go back to the printers where he scanned his QR code (quick tip: when picking up your badge, don’t leave the carpet that leads from QR/check-in to badge pick-up).

Opening Acts

This year’s Opening Acts was well executed, once again. Kudos to the crew that puts this on for the community. It’s run just as professionally as any of the official conference offerings. To paraphrase John Mark Troyer, the crowd at Opening Acts is representative of the community and largely acts as proxy for all of the people passionate about virtualization and the surrounding ecosystem.

In the first session, Current State of the VMware Ecosystem, the panelists tackled the challenging topic of where the ecosystem is and how it could move forward. This devolved fairly predictably into an assessment of VMware itself. Most concurred that the company is in a sustainment pattern, where not a lot of innovation is originating from within. An interesting suggestion/observation is that VMware is (or should be) considered the “glue” that ties together on-prem infrastructure and (multi-)cloud environments. Personally I think this would be a smart play for VMware: positioning themselves not only as the cloud go-between, but also for containers (vSphere Integrated Containers, anyone?) and “server-less” (a la AWS Lambda) could be their sweet spot for the next several years.

Some panelists contend that VMware has to figure out how to move past the hypervisor to stay sustainable. The hypervisor will, naturally, remain at the heart of VMware’s tech portfolio, but competent competition means that they can’t necessarily rely on it as the main revenue source. Likewise, the panel also suggested that the virtualization community itself needs to evolve. Again, virtualization may stay at the core, but bringing in more current and interest-driving tech, like Docker and AWS, will be key.

The How Failing Made Me Better panel discussed how failure played a role within their careers. Interestingly, three of the panelists are VCDXs who didn’t achieve the designation on their first try. They learned from that first failure, persevered, and ultimately achieved the coveted label. By and large this panel mutually agreed that failure is an inevitable, and useful, part of anyone’s career. As the saying goes, fail fast and fail often.

Welcome Reception

The annual opening of the Solutions Exchange went well. Among the crush of conference goers, it was nice to see lots of new companies displaying their wares. A little surprisingly, AWS had a prominent booth, though it really shouldn’t be a surprise given the new VMware Cloud on AWS offering. I look forward to chatting with some of the companies a little later in the week, when there’s a bit more breathing room in the hall.

Wrap

Day 1 in the can. Check back for live coverage of the general session and further updates throughout the week.

VMworld 2017 Roundup: Day 0
Sun, 27 Aug 2017 | https://teebeedee.org/2017/08/vmworld-2017-roundup-day-0/

VMworld Day 0. A day of anticipation, travel, and preparation for the wonderfully exhausting week ahead (feet don’t fail me now!). While I’m here cranking out some drafts behind-the-scenes to capture the week’s news, discoveries and mild manias, let’s check out a few of the things I’m looking forward to.

Sessions

Yes, the sessions get recorded. However, there’s nothing like taking in a few anticipated sessions in person. Here are a couple I’m looking forward to catching live.

vSphere 6.5 Host Resources Deep Dive: Part 2 [SER1872BU] – Frank Denneman and Niels Hagoort presented Part 1 last year, which was well-received and packed with deep technical knowledge. Apparently demand for this session is high enough that it’s already been moved to a bigger room! If you’re the kind that likes to read the book before you see the adaptation, you only have a few days to go…

Upgrading to vSphere 6.5 the VCDX Way [SER2318BU] – For a lot of organizations, upgrading to vSphere 6.5 is long overdue (vSphere 5.0 came out over six years ago; just saying…). Two VCDXs, Rebecca Fitzhugh and Melissa Palmer, take us through the upgrade process including design and planning. It sounds like it’s not simply the old step 1, step 2 procedural approach. I expect to take away plenty of useful notes and tips.

Hackathon

Monday night marks the return of the VMware {code} Hackathon to VMworld US. I’ve gushed before about how this was my favourite thing of VMworld 2016, so expectations are set dangerously high already. My team this year is Need for Speed, captained by the venerable Luc Dekens. Our goal: to bring speed improvements to the legendary vCheck PowerShell script (and maybe a few other tweaks and adjustments…). Tune in later this week to see how well we make out in the scant hours available.

Data Centre Tour

I’ve been lucky enough to be invited by a cadre of companies to tour the Switch data centre here in Las Vegas on Thursday. It might not get everyone excited, but I’m looking forward to seeing how a larger American data centre compares to the Canadian DCs I’ve toured. There’s also just something about the hum of a data centre that fills me with a sense of endless possibility… and cat memes… but mostly possibility.

Last, But Not Least…

Of course, what would VMworld be without a visit with the vCommunity? VMvillage, Opening Acts, informal hallway tracks, after-hours gatherings, bonding at the Solutions Exchange; what ties it all together are the people. Passionate, friendly, learned and ready to learn. The real gems that make VMworld shine.

As always, please stop and say hi.

Architecture Principles: Where do they fit in the RAC model?
Wed, 23 Nov 2016 | https://teebeedee.org/2016/11/architecture-principles-fit-rac-model/

Eric Wright recently asked, “Is open source a true technical requirement?” A great question that Eric explores well, and after reading his article it felt like we could quest a little further. So please, go read Eric’s article if you haven’t already, then pop on back to keep the conversation going.

Everybody Expects the RAC

If you’ve encountered a few decently documented solutions you’ve no doubt found that RAC is a lynchpin of good design. RAC, or requirements, assumptions and constraints, captures the goals and limitations of the solution, verified or not, in a way that lets us measure them. Granted, that measurement may be as simple as “met” or “unmet”, but it’s measured nonetheless.

They’re so ubiquitous in design that they’ve become muscle memory for many. The metric definition becomes, itself, a metric: “Do you have enough RAC in your design?” Sometimes, when we ask ourselves whether something is a requirement, a constraint, or an assumption, I wonder if we shouldn’t be asking whether it fits the RAC model at all.

It’s the Principles of the Thing

An ask to use Open Source software reads, to me, more like an architecture principle than an explicit requirement. What’s a principle? According to the Open Group, creators of TOGAF, “principles are general rules and guidelines, intended to be enduring and seldom amended, that inform and support the way in which an organization sets about fulfilling its mission.”

From a principle standpoint these considered guidelines become agreed upon and writ into the corporate rule book, or at least IT’s. I’m inclined then, in any technical design document detailing requirements, to specify “R1 – Must adhere to architecture principles”. Does this skirt the discussion about whether these types of considerations are, themselves, distinctly requirements? Yeah, to a point.

Principles inherently read as considered requirements. Imagine writing a brief business case for each of your requirements. Kind of that. But they’re also specific. “Must use OpenStack” is not a direct technical or business requirement. It reads more like a constraint. So principles straddle that line between requirements and constraints.

They’re also not inviolate. They’re guidelines, after all. So while allowing an exception to a principle should itself be exceptional, it can still happen, as long as the business is getting value out of the solution and reasonable risk levels are maintained.

Think about our example from the opposite standpoint, where a client strongly prefers off-the-shelf solutions with minimal customization. Would OpenStack contradict their principle? As open source software, yes, it would. But it might be the right thing to do given their particular solution’s goals.

What’s Done is Never Done

This is my take on Eric’s question. I’d love to hear what your answer would be. Would you approach it differently? Do you agree that a carefully worded requirement can adroitly include principles in design? How would you do it differently?

Chime in, keep the conversation going. We’ll all be the richer for it.

Install VMware ESXi 6.5 on Intel NUC (Part 2/2)
Mon, 21 Nov 2016 | https://teebeedee.org/2016/11/install-vmware-esxi-6-5-intel-nuc-part-2/

In the first post, we created an ISO image of VMware ESXi 6.5 to install on our Intel NUC. Now we turn that ISO into a bootable USB and do the install.

Make Bootable USB

We’re going to use an application called Rufus to create our bootable USB, so download and install it. Now start Rufus.

Rufus - Select ISO

First, make sure you select your target USB flash drive in the Device drop-down menu. Then, change the “Create a bootable disk using” drop-down menu to “ISO Image” (call-out 1). Then click the image selection button (call-out 2) to find and open our customized ISO (ex. ESXi-6.5.0-4564106-standard-customized.iso).

Rufus - Start USB Build

You should now see that the volume label has been updated to match the ISO image you chose. Click the Start button to begin building the bootable USB (call-out 3).

Rufus - 'menu.c32' Warning

A warning may pop up about an older version of the ‘menu.c32’ file. This file is part of Syslinux and helps make the ISO/USB bootable. If we don’t replace the older version of the file then our USB won’t boot properly, so click Yes (call-out 4).

Rufus - Data Destruction Warning

Rufus will remind you, forcefully, that you’re about to wipe out all the data on the target USB drive. You backed up anything on the flash drive that you wanted to keep, right? If you’re sure you’re ready to continue, click OK (call-out 5).

Rufus - Building the USB

The bootable USB will now be built by Rufus. Rufus basically does a bit of work to make the USB drive bootable, and then extracts the contents of the ISO file to it.

Rufus - Finished

When finished, you’ll see that the device name has changed at the top, the green progress bar is full, and the status reads “READY”. Click the Close button to exit out of Rufus (call-out 6).

Now you can eject your USB drive as it’s ready for use as a vSphere ESXi installation drive.

Install ESXi on NUC

If you’ve installed ESXi before, this should be really straightforward, so I won’t bother with all the details. Essentially you need to make sure to connect your Intel NUC to a monitor and keyboard, and that the drive that you want to install ESXi onto is either installed or plugged in. Remember that in my home lab example I’ll be installing ESXi on a USB drive. This means that both the USB drive that we’ve built to install ESXi from and the USB drive that we’ll be installing ESXi to have to be plugged into the NUC.

As a reminder, F10 will allow you to select your boot device on the Intel NUC.

In order to get the USB Ethernet adapter to work, we need to enable the ESXi Shell, login, and run the following command:

esxcli system module set -m=vmkusb -e=FALSE
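
If you want to sanity-check the change before rebooting, listing the module from the same shell should show its enabled state (a quick extra check on my part, not part of the original steps):

# Confirm that the vmkusb module is now disabled.
esxcli system module list | grep vmkusb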

Then log out of the shell, disable it again and reboot the host.

Bigger and Better Things

Now that we have successfully installed ESXi 6.5 onto our NUC, we can begin to do some fun and interesting things. Like, say, install ESXi on some more NUCs and then create a vSAN cluster. Stay tuned.

Featured image photo by ActiveSteve
