Patching ScaleIO 2.0 VMware Hosts

Recently I did an ESXi patch run, along with some BIOS and firmware updates, on a ScaleIO 2.x environment (more precisely 2.0.5014.0). The environment consists of Dell PowerEdge servers, some running ESXi 6.0 build 3380124 and some running Linux as non-virtualized hosts. Luckily this environment was ScaleIO 2.x, because this version has a real maintenance mode (1.3.x did not). This means that while I can only patch one host at a time in this layout, I can do it fairly quickly and in a controlled fashion.

ScaleIO Maintenance Mode vs. ESXi Maintenance Mode

These are, obviously, two different things. With ScaleIO maintenance mode, you can put one SDS (the component providing storage services) host at a time into maintenance mode (at least in this configuration with two MDMs) without adverse impact on the cluster. The remaining SDS will take care of operations, provided it doesn't break or go down at the same time. After you are done patching, you exit maintenance mode, which then makes sure all changes are rebuilt and synced across the cluster nodes. This takes some time depending on the amount of data involved.

ESXi maintenance mode on the other hand, deals with putting the VMware hypervisor layer into maintenance mode so you can patch and perform other operations on it with no VMs running. The order is:

  1. ScaleIO
  2. VMware ESXi

And when coming out of the maintenance break, it’s the reverse.

I left the SVM (the virtual machine on the host that takes care of the host's various ScaleIO functions; technically a SLES appliance) registered on the host I was patching, but I powered it down gracefully before putting the host into maintenance mode.

So accounting for all these things, my order was:

  1. Migrate all running VMs except the SVM off of the host using vMotion
  2. When the host is empty (bar the SVM), put ScaleIO into maintenance mode
    1. This is done via the ScaleIO GUI application, on the Backend page, by right clicking on the host. I did not have to use the force option, and neither should you…
  3. Shut down the SVM via “Shut Down Guest” in vCenter
  4. Put the host into maintenance mode without moving the SVM off the host (I suppose you could move it, but I didn’t)
  5. Scan and Remediate and install other patches (I installed BIOS, iDRAC and some other various updates via iDRAC; I had set them to “Install next reboot” so they would be installed during the same reboot as ESXi does remediation)
  6. Once you are satisfied, take the host out of maintenance mode
  7. Start the SVM on that host
  8. Wait for it to boot
  9. Exit ScaleIO maintenance mode (see 2.)
  10. Check to see that rebuild goes through (ScaleIO GUI application, either the Dashboard or Backend page)
  11. Make sure all warnings and errors clear. During host remediation and patching, I had the following errors:
    1. High – MDM isn’t clustered (this is because you’ve shut down one of the SVMs containing the MDM role)
    2. Medium – SDS is disconnected (for the host being remediated)
    3. Low – SDS is in maintenance mode (for the host being remediated)
  12. After the SVM starts, it should clear all but the last alert, and once you have Exited Maintenance Mode, the final alert should clear
Exiting maintenance mode in ScaleIO GUI application
Rebuilding after exiting maintenance mode in ScaleIO
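The ScaleIO side of steps 2 and 9 can also be done from the CLI on the Master MDM instead of the GUI. A rough sketch below; the SDS name `sds01` is a placeholder, and the exact scli flags may differ per version, so check `scli --help` on your installation first:

```shell
# Log in to the MDM (prompts for the admin password)
scli --login --username admin

# Step 2: put the SDS on the host being patched into maintenance mode
scli --enter_maintenance_mode --sds_name sds01

# ... patch the host, reboot, start the SVM, wait for it to boot ...

# Step 9: exit maintenance mode, which kicks off the rebuild/resync
scli --exit_maintenance_mode --sds_name sds01

# Steps 10-11: watch the rebuild progress and remaining alerts
scli --query_all_sds
scli --query_cluster
```

The GUI shows the same information on the Dashboard and Backend pages; the CLI is mostly useful if you want to script or log the procedure.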

(Expected) Alerts during maintenance

As mentioned, you will have alerts and warnings during this operation. I had the following:

  • First, when putting the SDS into maintenance mode in ScaleIO, one warning about SDS being in maintenance mode:
SDS still on, ESXi not in maintenance
  • After SVM is shut down and ESXi is also placed in maintenance, two more:
All three alerts after host is in maintenance and SVM has been shut down
  • Then once you have remediated and taken the host out of maintenance, and started the SVM, you’re back to one, as in the first picture.
  • When you take the SDS out of maintenance, it will clear the last alert

Note that the highest rated alert, the High "MDM isn't clustered", is actually noteworthy. It means that the SDS host you are taking down for maintenance also had the MDM role (critical for management of ScaleIO). Normally you'd have another one, and you shouldn't proceed with any of this if you can only find one MDM, or if you already had this (or any other) alert.

EMC has this to say about MDMs (also see the document h14036-emc-scaleio-operation-ensuring-non-disruptive-operation-upgrade.pdf):

Currently, an MDM can manage up to 1024 servers. When several MDMs are present, an SDC may be managed by several MDMs, whereas, an SDS can only belong to one MDM. ScaleIO version 2.0 and later supports five MDMs (with a minimum of three) where we define a Master, Slave and Tie-breaker MDM.

Roles / Elements in ScaleIO

You can see the installed roles in VMware in the notes field, like so:

Roles in the Notes field in VMware

Elements or roles are (may not be a complete list):

  • MASTER_MDM – Master MDM node, Meta Data Manager, enables monitoring and configuration changes
  • SLAVE_MDM – Secondary MDM node, will take over if Master is unavailable
  • SDS – Storage node, ScaleIO Data Server, provides storage services through HDD, SSD, NVMe etc.
  • SDC – ScaleIO Data Client, consumer of resources (e.g. a virtualization host)
  • RFCACHE – Read-only cache consisting of SSD or Flash
  • RMCACHE – RAM based cache
  • LIA – Light installation agent (on all nodes, creates a trust between node and Installation Manager)
  • TB – Tiebreaker, in case of conflicts inside cluster, counted as a type of MDM, non critical except in HA/conflict situations

ESXi funny business…

While running remediate on the hosts, every single one failed when installing patches.

Scary Fatal Error 15 during remediation

A very scary looking Fatal Error 15. However, there’s a KB on this here.

So, (warm) reboot the host again, wait for ESXi to load the old pre-update version, and remediate again without using the Stage option first. I had staged the patches, as I'm used to doing; apparently that breaks things. Sometimes.

And to re-iterate, I was patching using vCenter Update Manager (or VUM) from 6.0 build 3380124 to 5050593.

Sources (not actually for the exact version in use, but similar enough in this case; use at your own risk):

ScaleIO v2.0.x User Guide.pdf, contained in the above-mentioned download

Home Lab Xeon

The current home lab setup consists of an Intel Core i3-2100 with 16GB of DDR3, a USB drive for ESXi (on 6.5 right now) and a 3TB WD drive for the VMs. While the Intel i3 performs perfectly for my needs, I came across a Xeon E3-1220 (SR00F, Sandy Bridge), which should be even better!

For the specs, we have the following differences:

Model:                  Intel Xeon E3-1220               Intel Core i3-2100
Released:               Q2 2011                          Q1 2011
Manufacturing process:  32 nm                            32 nm
Original price:         189-203 USD (more in euroland)   120 USD
Core count:             4 cores                          2 cores
Hyperthreading:         No                               Yes
Base freq:              3.1 GHz                          3.1 GHz
Turbo freq:             3.4 GHz                          No
TDP:                    80 W                             65 W
Max memory:             32 GB ECC DDR3                   32 GB non-ECC DDR3
L1 cache:               128 + 128 KB                     64 + 64 KB
L2 cache:               1 MB                             512 KB
L3 cache:               8 MB                             3 MB

So we can see that the Xeon is a 4-core processor without hyperthreading: real cores, as opposed to the i3's threads. It's more power hungry, which is to be expected, but it can also Turbo at a higher frequency than the i3. The Xeon also has more cache, which is again to be expected from a server grade component.

A notable thing is that the Xeon, being a server part, does not include the GPU component, so I'll have to add a GPU at least for the installation. I run the server headless anyway, but I want to see it POST at least. The motherboard is an Asrock H61M-DGS R2.0, which has one PCIe x16 slot and one PCIe x1 slot, and no legacy PCI slots; the x16 slot is already used by the NIC, and I have no x1 cards. Maybe I'll do it all headless and hope it POSTs? Or take out the NIC for the installation?

Some yahoo also tried running an x16 card in an x1 slot here. Might try that, but since I'd have to melt off one end of the x1 slot, probably not.

There are apparently some x1 graphics cards, but I don’t have one as I mentioned. An option could be the Zotac GeForce GT 710, which can be had for 60 euros as of this post.


I went to the pharmacy to get some pure isopropyl alcohol. It wasn't on the shelf, so I had to ask for it. I told the lady I need some isopropyl alcohol, as pure as possible. She looked at me funny and said they had some in stock. I told her I'm using it to clean electronics, so she wouldn't suspect I'm some sort of cringey, soon-to-be-blind (not sure if you go blind from this stuff, but it can't be good for you) wannabe alcoholic, to which she replied that she doesn't know what I'll do with it, or how it will work for that. She got the bottle, which is described as "100 ml Isopropyl Alcohol". There is a mention of cleaning vinyl disks and tape recorder heads on the back, so I was vindicated. There's no indication of purity on the bottle, but the manufacturer lists above 99.8% purity here. Doesn't exactly match the bottle, but it's close.

Why did I get isopropyl alcohol? Well, because people on the internet said it’s good for cleaning off residual thermal paste from processors and CPU coolers. With common sense 2.0, I can also deduce that anything with a high alcoholic content will evaporate, and not leave behind anything conductive to mess things up. Oh and it cost 6,30€ at the local pharmacy. It’s not listed on the website (or it says it’s no longer a part of their selection).

Let’s see how it performs. I’m using cotton swabs, but I suppose I could use a paper towel. If it leaves behind cotton pieces, I’ll switch to something else.

The Xeon originally had a passive CPU block and a bunch of loud, small case fans, but I will use the same cooler as for the i3.

Take out the i3 and the cooler. Clean the cooler off with the isopropyl:

Isopropyl worked wonders

Put in the E3, new thermal paste. I used some trusty Arctic Silver 5.

Thermal paste added, note artistic pattern

Re-attach the cooler and we’re off to the races. I’ll note here that I hate the push through and turn type attachments of the stock Intel cooler. Oh well, it’ll work.


Powering on

Powering the thing on was the exciting part. Will there be blue smoke? Will it boot headless? Will it get stuck in some POST screen and require me to press a button to move on? Maybe even go into the BIOS to save settings for the new CPU?

Strangely enough, after a while, I started getting ping replies from ESXi meaning the box had booted.

There’s really nothing left to do. ESXi 6.5 recognizes the new CPU and VMs started booting shortly after.

Xeon E3 running on ESXi 6.5

My Intel Core i5 Skylake Build

After four years on an Intel i5-2500, I decided it was time for an upgrade, partly because I want to pass down some components to the other people in the household. While the i5-2500 (bought back in 2012) still performs admirably, and the new CPU will not be significantly faster, it will sit in a motherboard with modern connectors (USB 3.0, 3.1, M.2 etc.), and bring the memory up to DDR4. The i5-2500 wasn't the latest processor when it was bought, either; Ivy Bridge (the 3xxx series) was out, but still a bit costly. This time, I plunked down for the latest generation, simply because it's the second 14nm CPU family from Intel, the first being Broadwell; it represents the "tock" in the (now defunct?) Intel tick/tock development model. The tock stays on the same manufacturing process as the previous tick, but optimizes performance and reduces power consumption. The CPU I chose, the Core i5-6600K, is similar to the one I have. The main difference (other than four generations of Intel CPUs in between) is the K, signifying an unlocked CPU. While I don't usually go for overclocking, I might want to squeeze some extra performance out of this one at some later date, seeing as my upgrades are few and far between.

As an interesting fact, this Intel Core i5-6600K cost 270€, while the i5-2500 cost a little under 200 back in 2012 (196€ I think). The next one up would have basically been a locked i7-6700, at 354€. The locked version of the CPU I got (the plain i5-6600) would have been 253€, so 17 euros less.

For the motherboard, I picked the Asus Z170 Pro Gaming. There are cheaper alternatives (B and H chipsets, starting at around 60€), but I figured, with a semi-expensive CPU, I’d better not cheap out on the motherboard. I actually bought a bundle, which contained the motherboard, and an Asus ROG Gladius mouse (which isn’t actually that bad; it costs around 60€ bought separately).

For the CPU cooler, I didn't want to go all out for a Noctua at 70-90 bucks. I instead opted for the fairly well priced Cooler Master Hyper 212 EVO. At 42€ it's a mid-range cooler, which has done fairly well in the reviews I read.

Rounding everything off, I got 16 GB of Kingston HyperX Fury memory, operating at 2666 MHz: a kit of two 8 GB sticks, which set me back 83€. I could have opted for cheaper memory; words like "Fury" or "Hyper" do not really factor into my daily usage profile. But it is a 16 GB kit which is certified compatible with the motherboard, per Asus' documentation, and that is important to me. (The cheapest 16GB DDR4 kit/stick right now costs 64€.)

Pile o’ Parts


Installation started with a backup of my system. I use Veeam Endpoint Backup Free (v. 1.5), backing up to a 3TB NAS. In case I need a bare metal recovery, there's an ISO file that I can burn to a disc or throw on a USB stick. I probably won't need it, but you can never be too sure. What's being removed is:

  • Asus P8Z68-V Gen. 3 Motherboard
  • Intel Core i5-2500
  • 16 GB of DDR3 memory (4x4GB sticks)
  • Noctua NH-U12P (dual fan)

So I’m not touching the case (Fractal Design Define R4 Pearl Black), storage (Samsung 840 Pro 256GB SSD, 1 x WD Red 2TB drive + Intel 910 SSD PCI-E card, 400GB), PSU (Corsair 650 TX), GPU (Nvidia GTX 960) and other assorted bits and bobs.

Oh, I am getting rid of my Razer Blackwidow keyboard and my Razer Deathadder Chroma, because Razer software is shit. It annoyed me to the point of throwing out 200 € worth of Razer stuff. The mouse is already replaced, the keyboard will wait for Assembly 2016, where I will buy a Ducky. Maybe this will be another blog post later on.

I am also taking the time to clean out the case of dust and so on. A good thing to remember during the hot summer. Dust makes for bad air flow, and bad air flow makes for hot computers. And I don’t mean the sexy kind!


The case was absolutely full of dust. Luckily there is at least *a* filter in the bottom of the case, but most fans were still in pretty bad shape. I started by disconnecting all case-to-motherboard cables, as well as PSU-to-motherboard cables. After that, I removed the motherboard/CPU/memory/cooler combo. I had forgotten how heavy the Noctua NH-U12P was!

A quick dust-off, and the case was ready to receive the new parts.

Build.. up?

Ah! Forgot about the I/O backplate. Remove that, and insert the new one that came with the Z170 motherboard.

Check that motherboard standoffs are all in shape and tighten them.

I opted to install the CPU and cooler prior to putting the motherboard in the case. Socket 1151 installation was very simple with the included CPU installation tool: snap the CPU into the plastic tool, put the tool with the CPU inside into the socket, and close the socket latch. I was surprised that you can actually leave the tool in place, but it fits, and the instructions tell you just that.

First, attach the plate that comes behind the motherboard for cooler mounting. This wasn’t too hard, but it was nice to have an extra pair of hands to help. You have to flip the board in order to attach bolts to the other side. A handy tool is included for tightening them.

Small dab of thermal paste in the middle of the CPU (I always do it this way), and attach the cooler to the previously attached backplate. Very easy, although you do have to apply a small amount of force to get the spring-attached screws to bite properly.

Smoke test

I like to run Memtest86 for a night after a new build is done. Also a few hours of furmark / prime95 just to see that things are stable. Some people advocate even longer tests, and there might be arguments for this, but I’m content. Temps for the new build are very good, even with the budget-priced Cooler Master. I can readily recommend this combination (i5 Skylake + CM 212 EVO) based on my experiences.

Idle temps are X degrees, and during testing (say 3DMark), CPU reaches around Y degrees C.

Conclusion and final words

The performance increase isn't really noticeable; not that I expected it to be. Here are some 3DMark results comparing the previous build (i5-2500 with DDR3 memory) with the current build (i5-6600K with DDR4). Most other components are the same.

3DMark Firestrike: 6401 vs 6608 (where the biggest differentiator was the physics score, in which this Skylake build scored 1000 points more than the older i5)

3DMark Sky Diver: 17444 vs 18394 (again CPU bound tests made the difference)

3DMark Cloud Gate: 15435 vs 17454

And finally just as a joke, 3DMark 11: 9250 vs 9550 (CPU again differed by about 1000 points in favor of the Skylake)

After writing this article, I've upgraded the BIOS twice: once to version 1901 and then to 1904. Both have been stable, with no noticeable differences. I've used the EZ Flash utility in the BIOS, and it's worked fine. It can also connect to the internet, but that requires an extra reboot, so I've just placed the file on a disk and then browsed to that disk in the BIOS. We've come a long way from booting to FreeDOS or something from a floppy or USB stick, and then flashing! There's also an option to do this from Windows, but I've usually opted to do it in the BIOS. It just feels safer.

Messy build is done
Sure, it’s not cable managed and there’s no color coordination. Sorry!

Oh, and also, I ended up getting a Turtle Beach Impact 500 at Assembly Summer 2016. There were no Ducky keyboards for sale (typical, they've been there every year..). But on the other hand, this 69€ keyboard has performed like a goddamn champ! Cherry MX Blue switches, tenkeyless, with uh.. 6-key rollover? Enough for my needs. Very good feel, compact, solid build and a detachable cable. Based on two months of usage, get this thing if you're looking for a cheap minimalistic mechanical keyboard!

Turtle Beach Impact 500

All of my Razer stuff is in the garbage now. Adios!

Lenovo Thinkpad T460s First Impressions

I recently switched laptops from the T440s to the T460s. I’ve long been a fan of the Thinkpads, both during the IBM period and the Lenovo reign of late. The T440s was a bit of a mistake in my opinion. Sure it performed as you’d expect, but the mouse was a huge pile of dung, and the keyboard wasn’t nice either. My favorite is still the T410s, which had the non-chiclet keyboard, similar or same as the old IBM Thinkpads had. I had a bunch of issues with the T440s over its 2 year and some odd month lifespan. The SSD broke early on and had to be replaced. I broke the keyboard (no fault of Lenovo, but still), and one USB port is unusable (not sure why). Battery life is still good after two years of business use, and it has no technical faults other than the ones I listed. It’ll still serve as my secondary machine, and probably do so for quite some years.

Plain old packaging

I got the T460s hot off the press, just a week or so after release. I opted for the 20F9-0043MS model, which has the full-HD matte screen, 4 + 4GB of RAM (which I expanded to 20GB by switching out the sole 4GB SO-DIMM for a 16GB one), a Core i7-6600U processor, and so on.


First, let’s look at the hardware. We have output from CPU-Z first, showing the features of the CPU:

Detail of the main page, showing Skylake U/Y series CPU. Note the rather cool 15W TDP and 4MB L3 cache, plus the awesome 14nm manufacturing process.
Detail of memory page. Total of 20GB DDR4, 4GB internal soldered on the motherboard, + 16GB SO-DIMM
Mainboard details. Proprietary Lenovo motherboard, running 1.05 BIOS (later upgraded to 1.08)
CPU-Z Cache page listing the CPU caches

Then GPU-Z, showing the integrated Intel HD Graphics 520:

GPU-Z output. Chip is Skylake GT2 from last fall


Then we move on to the SSD, which appears to be an M.2 type drive and not your standard 2.5″ SSD. I’ll get an internal picture later for you, but opening the bottom of the machine (which is much easier than in the T440s which had icky plastic tabs that were too easy to break off), shows you all the user replaceable parts, which are very easily accessible! The SSD is manufactured by Samsung, however the model seems to be something sold to OEMs (the catchy MZNLN256HCHP). Some forums speculate that it is similar to the 850 (EVO?) model, but nothing certain.

Here’s some output from SSD-Z:

Some data on the Samsung SSD. Sata-3 bus, 256GB


CrystalDiskMark 5.1.2 results for the T460s

If you want to compare performance (I'm not saying CrystalDiskMark is the ultimate tool, and these are not official testing conditions, but they are at least comparable, I would wager) to some select SSDs, here are my Intel 910 (PCI-E card) results, the Samsung 840 Pro results, the T440s results and finally the venerable T410s results. All results are from 64-bit CrystalDiskMark version 5.1.2 with default settings.

Mobile Connectivity

There’s a 4G/LTE card in this model, which is a Sierra Wireless EM7455 Qualcomm Snapdragon X7 LTE-A WWAN Modem. The fun part was taking out the SIM-caddy, which was surprisingly already occupied! There was a “Lenovo Connect” SIM-card inside. Apparently, Lenovo has partnered up with a number of carriers worldwide (115 countries according to Lenovo). But since those cost extra, and I already have such connectivity in the countries I need to travel to, I took the SIM out. You might want to have a look at it, but it looks like most packages have data caps, which I discard out of principle. The prices don’t look.. bad, I suppose. Here’s the link

As for the 4G performance, I tested it in Lapland, which has superb 4G connectivity (probably due to the low number of subscribers per cell), and it works fine without additional software in Windows 10. Speedtest gave me the following results (DNA is the carrier).

Speedtest run in April of 2016 in Finnish Lapland

WiFi card is an Intel Dual Band Wireless-AC 8260, and the gigabit NIC is an Intel I219-LM. Both are bog-standard intel quality and have worked fine.

There is one thing that annoyed the piss out of me. Clicking the Notifications icon in the systray…

..this one!

You get the otherwise handy Action Center / Notification bar thing, where you can turn off things like bluetooth, wireless, and yes, even cellular (though it is not showing here right now). Well, what happens if you turn off cellular here, and you want it back? Naturally, instinct tells you to open the action center thing again and re-enable it! But, what if it doesn’t show up (like it did for me)? What then? Well the next step is to go to Network Connections, look at the adapters and enabl… oh but wait it’s already enabled. But still it’s off, and you can’t connect? Crap!

Handy action center! Not showing cellular because of reasons?

So after an unreasonable amount of googling, I found some people with similar issues. Apparently you can’t enable it anywhere in Windows proper (if you can, please tell me in the comments). No amount of enabling and disabling the card in network connections or device manager brings it back, or going to airplane mode or.. whatever. Instead what you need to do is sign out, and in the login screen, click the connectivity icon (the wireless symbol). From there, you can re-enable the radio of the WWAN card. Horse shit I say!

Clean install of Windows 10

I don’t care for manufacture-bloated OS’s, so I did a clean re-install of Windows 10 Enterprise, build 1511. Because I’m a dummy, I didn’t initially realize my mistake and attempted to install from my Easy2Boot USB drive. And that works too, if you’ve read the instructions and understand what you are doing… Here’s what I did wrong, so you don’t have to do the same things:

  1. Easy2Boot works fine, but you have to understand that if the install image is of UEFI type (which the windows image is), you can’t just copy the image to the Windows directory like other images
  2. You have to follow the Easy2Boot instructions to convert the Windows install image into an imgPTN image, and then try again
  3. Or, alternatively, get a suitably sized USB stick (4GB should do, 8GB will most definitely do), and use the Windows Media Creation tool (only for home and pro versions), or use Rufus but select the “GPT partition scheme for UEFI” option under ‘Partition Scheme and Target System Type’, or it won’t boot correctly. Or use the Windows 7-era tool (step 12 onwards)
  4. In my case, it did boot, but failed to find suitable devices to install to, or was lacking other drivers
  5. And no, adding SATA or other disk-related drivers during install did nothing to fix this – it's a UEFI issue
  6. Changing BIOS settings between UEFI only, Legacy only, and Legacy first (and the CSM setting) also didn’t help in this case
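If you'd rather skip the tools entirely, a plain GPT + FAT32 stick also boots on UEFI machines. A rough sketch using a diskpart script (disk number 1 and the drive letters are assumptions; always verify with "list disk" first, since clean wipes the selected disk):

```shell
REM Save the following as make-uefi-usb.txt, then run: diskpart /s make-uefi-usb.txt
REM --- make-uefi-usb.txt ---
REM select disk 1          <- the USB stick, verify with "list disk"!
REM clean
REM convert gpt
REM create partition primary
REM format fs=fat32 quick label=WIN10
REM assign letter=U
REM exit
REM -------------------------

REM Then mount the Windows ISO (here assumed as E:) and copy its contents:
robocopy E:\ U:\ /E
```

One caveat: FAT32 has a 4 GB file size limit, so this only works as long as the image's install.wim stays under 4 GB, which it did for the builds of that era.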

After learning about UEFI stuff, installation was straightforward. The only Lenovo tool I like to install is the excellent Lenovo System Update, which keeps track of the correct drivers and helper software and makes sure everything is up to date. It also updates your BIOS, which is pretty useful. As of this date, I'm on BIOS 1.08 (or.. UEFI, I guess).

There’s more to write, but so far, I’m very pleased with the T460s. Much more than the 440s. The hardware is easily accessible, it’s performant and the mouse is much improved. To quote Wil Wheaton: “Later, nerds.”



MicroATX Home Server Build – Part 4

After a longish break, here’s the next installment! So the server has been in production now since last September, and is running very well. After the previous post, this is what’s happened:

  • Installed ESXi 6.0 update 1 + some post u1 patches
  • Installed three VMs: an OpenBSD 5.8 PF router/firewall machine, a Windows Server 2016 Technical Preview to run Veeam 9 on, and an Ubuntu PXE server to test out PXE deployment
  • Added a 4 port gigabit NIC that I got second hand

In this post, I’ll be writing mostly about ESXi 6.0 and how I’ve configured various things in there.

For the hypervisor, I bought a super small USB memory, specifically a Verbatim Store n’ Stay (I believe this is the model name) 8GB, which looks like a small Bluetooth dongle. It’s about as small as they get. Here’s a picture of it plugged in:

The Verbatim Store N Go plugged in

Using another USB stick created with Rufus, which had the ESXi 6u1 installation media on it, I installed ESXi on the Verbatim. Nothing worth mentioning here. Post-installation, I turned on ESXi Shell and SSH, because I like having that local console and SSH access for multiple reasons, one of them I’ll get to shortly (hint: it’s about updating).
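Enabling the ESXi Shell and SSH is normally a few clicks in the client, but for scripted or repeatable setups it can also be done from the local console with vim-cmd. A sketch (these hostsvc calls exist on ESXi 6.x, but double-check on your build):

```shell
# Enable and start the SSH service (TSM-SSH)
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# Enable and start the local ESXi Shell (TSM)
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
```

"Enable" makes the service start with the host; "start" turns it on right now.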

Since I didn’t want to use the Realtek NIC on the motherboard to do anything, I used one of the ports on the 4 port card for the VMkernel management port. One of the ports I configured as internal and one as external. The external port is hooked up straight to my cable modem, and it will be passed through straight to the OpenBSD virtual machine, so it can get an address from the service provider. The cable modem is configured as a bridge.

The basic network connections therefore look like this:

Simple graph of my home network

After the installation, multiple ESXi patches have been released; those can be found on VMware's patch download portal. Patches for ESXi can be installed in two ways: either through vCenter Update Manager (VUM) or by hand over SSH / the local ESXi Shell. Since I will not be running vCenter Server, VUM is out of the question. Installing patches manually requires you to have a datastore on the ESXi server where you can store the patch during installation. The files are .zip files (you don't decompress them before installation), and are usually a few hundred megabytes in size.

To install a patch, I uploaded the zip file to my datastore (in this case the 2TB internal SATA drive) and logged on to the host through SSH. From there, you just run: esxcli software vib install -d /vmfs/volumes/volumename/patchname.zip (the -d flag takes the full absolute path to the offline bundle zip)

Patches most often require reboots so prepare for one, but you don’t have to do it right away.
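The whole manual patch cycle can be sketched as follows. The datastore name and the bundle filename here are examples, not the actual files I used:

```shell
# From your workstation: copy the offline bundle onto a datastore on the host
scp ESXi600-update-bundle.zip root@esxi-host:/vmfs/volumes/datastore1/

# On the host, over SSH: install the bundle (full path to the zip is required)
esxcli software vib install -d /vmfs/volumes/datastore1/ESXi600-update-bundle.zip

# Check the result, then reboot when it suits you
esxcli software vib list | less
reboot
```

Worth noting: VMware's patch KB articles often recommend `esxcli software vib update -d …` instead of `install` for patch bundles, since update only applies newer VIBs; check the KB for the specific patch.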

Update 2 installed on a standalone ESXi host through SSH

Edit: As I’m writing this, I noticed Update 2 has been released. I’ll have to install that shortly..  Here’s the KB for Update 2

A one-host environment is hardly a configuration challenge, but some of the stuff that I’ve set up includes:

  • Don’t display a warning about SSH being on (this is under Configuration -> Advanced Settings -> UserVars -> UserVars.SuppressShellWarning “1”)
  • Set hostnames, DNS, etc. under Configuration -> DNS and Routing (also made sure that the ESXi host has a proper dns A record and PTR, too; things just work better this way)
  • Set NTP server to something proper under Configuration -> Time Configuration
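Most of those settings can also be applied from the shell, which is handy on a standalone host. A sketch; the hostname, domain and DNS server values are placeholders for your own environment:

```shell
# Suppress the "SSH is enabled" warning (same as the Advanced Settings change)
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

# Hostname and domain
esxcli system hostname set --host=esxi01 --domain=home.lab

# DNS server and search domain
esxcli network ip dns server add --server=192.168.1.10
esxcli network ip dns search add --domain=home.lab
```

NTP has no esxcli namespace in 6.0, so set that one in the client, or edit /etc/ntp.conf and restart the service with /etc/init.d/ntpd restart.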

For the network, nothing complicated was done, as mentioned earlier. The management interface is on vmnic0, vSwitch0; it has a VMkernel port which holds the management IP address. You can easily share management and virtual machine networking if you want to, though that's not a best practice. In that scenario, you would create a port group under the same vSwitch and call it something like "Virtual Machine port group". That port group doesn't get an IP; it's just a network label you can refer to when assigning networking for your VMs. Whatever settings are on the physical port / vSwitch / port group apply to VMs that have been assigned to that port group.
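Creating a vSwitch with an uplink and a VM port group can be done from the shell, too. A sketch; vSwitch1, vmnic1 and the port group name are examples, not my actual config:

```shell
# Create a standard vSwitch
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Attach a physical NIC to it as an uplink
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1

# Add a port group that VMs can be assigned to
esxcli network vswitch standard portgroup add --portgroup-name="VM Port Group" --vswitch-name=vSwitch1
```

After this, "VM Port Group" shows up as a network choice when editing a VM's network adapter.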

By the way, after the install of Update 2, I noticed something cool on the ESXi host web page:

VMware Host..client?

Hold on, this looks very similar to the vSphere Web Client, which has been available for vCenter since 5.1?

Very familiar!

Very similar, in fact! This looks awesome! Looks like yet another piece that VMware needs in order to kill off the vSphere Client. Not sure I'm ready to give it up just yet, but the lack of a tool to configure a stand-alone host was one of the key pieces missing so far.

Host web client after login

In the next post I will be looking at my VMs and how I use them in my environment.

Relevant links:
The Host UI web client was previously a Fling, something you could install but that wasn't released with ESXi. But now it's official.

MicroATX Home Server Build – Part 3

Because I am impatient, I went ahead and got a motherboard, processor and memory. The components that I purchased were:

  • Asrock H61M-DGS R2.0 (Model: H61M R2.0/M/ASRK, Part No: 90-MXGSQ0-A0UAYZ)
  • 16 GB (2x8GB) Kingston HyperX Fury memory (DDR3, 1600MHz, HX316C10FBK2/16, individual memories are detected as: KHX1600C10D3/8G)
  • Intel i3-2100 (2 cores, with hyperthreading)

I ended up with this solution because I realized I may not have enough money to upgrade my main workstation and pass its parts down into this machine. I also didn't have the funds for a server grade processor, and getting an mATX server motherboard turned out to be difficult on short notice (did I mention I'm an impatient bastard?).

I ended up paying 48€ for the motherboard, 45€ for the processor (used, including Intel stock cooler) and 102 bucks for the 16GB memory kit.

The motherboard has the following specs:

  • 2 x DDR3 1600 MHz slots
  • 1 x PCIe 3.0 x16 slot
  • 1 x PCIe 2.0 x1 slot
  • 4 x SATA2
  • 8 USB 2.0 (4 rear, 4 front)
  • VGA and DVI outputs

The factors that led to me choosing this motherboard were mainly: Price, availability, support for 2nd and 3rd generation Intel Core processors (allowing me to use the i3 temporarily, and upgrade to the i5 later if I feel the need), and the availability of two PCIe slots. All other features were secondary or not of importance.

The reductions in spec that I had to accept were: no support for 32GB of memory (as mentioned in the previous post), and no integrated Intel NIC (this board has a crappy Realtek NIC, which I might still use for something inconsequential like management; probably not, though).

These shortcomings may or may not be corrected at a later date, when I have more money to put toward the build and the patience to wait for parts.

The CPU is, as mentioned, an Intel i3-2100. It’s running at 3.1 GHz, has two cores, four threads (due to HT), 3MB Intel ‘SmartCache’, and a 65W TDP. It does support 32GB of memory on a suitable motherboard. I doubt the CPU will become a bottleneck anytime soon, even though it is low-spec (it originally retailed for ~120€ back when it was released in 2011). The applications and testing I intend to do are not CPU-heavy, and since I have four logical processors to work with in ESXi, I can spread the load out some.

Putting it all together

Mounting the motherboard was fairly easy. There were some standoffs already in the case, but I had to add a few to accommodate the mATX motherboard. There’s plenty of space for cabling from the PSU, and I paid literally zero attention to cable management at this point. The motherboard only has two fan headers: one for the CPU fan (obviously mandatory) and one for a case fan. I opted to hook up the rear fan (included with the case) to blow hot air out from around the CPU. I left the bottom fan in; I may hook it up later, or replace it with the 230mm fan from Bitfenix.

Initially, I did not add any hard drives. ESXi would run off a USB 2.0 memory stick (Kingston Data Traveler 4GB), and the VMs would probably run from a NAS. I ended up changing my mind (more on this in the next post). For now, I wanted to validate the components. I opted to run trusty old MemTest86+ for a day or so. Here’s the build running MemTest:

Build almost complete, running MemTest86+

Looks to be working fine!

Here’s a crappy picture of the insides of the case, only covered by the HDD mounting plate:

Side panel open, showing HDD mounting plate, side of PSU

One thing to note here is that if you want the side panel completely off, you need to disconnect the cables seen to the front left. These are for the power and reset buttons, USB 2.0 front ports and HDD led. They are easy to remove, so no biggie here.

One note on the motherboard: There has only ever been one release of the BIOS, version 1.10. This was installed at the factory (obviously, as there were no other versions released at the time of writing). If you do get this board, make sure you are running the latest BIOS. Check for new versions here:

So this is the current state of the build. Next up…

  • Installing ESXi 6.0U1 (just released in time for this build)
  • Deciding on where the VMs would run
  • Adding NIC and possible internal storage
  • Configuring ESXi
  • Installing guest VMs

Stay tuned!

Relevant links:

MicroATX Home Server Build – Part 2

First an editorial correction to the previous post. An Intel B85 chipset motherboard will not support my current LGA1155 socket i5 processor, because that chipset is meant for the 4th Generation stuff (i.e. Haswell). Forget I wrote that.

And meanwhile, back at the content:

The case arrived last Friday, and it’s a nice one! I’ve already stripped out the stuff I don’t need (mainly the 5,25″ bay internals) and installed the Corsair VX 450W PSU I had lying around from a previous build. A few notes:

  • The PSU installation was tricky
  • There are plenty of fans included, but they can easily be replaced. I’m thinking of getting their own 230mm (!) fan for the bottom of the case, since it should be fairly quiet
  • The handles on the bottom and top are a mixed bag. They are flexible, yet solid, so I wouldn’t worry about breaking them per se. I did end up removing the bottom handles (are they still handles even though they are on the bottom?) because the case felt wobbly with them. I don’t want it to sway if I touch it.
  • Plenty of slots for 2.5″ and 3.5″ HDD’s. Very nice! All with removable mounting brackets of sorts
  • The case was wider than I thought, but this isn’t a bad thing
  • Most things are toolless, but there were some (easily removable) screws for certain parts
  • A nice selection of screws, rubber grommets, standoffs and other bits and bobs were included

The PSU installation

The PSU is installed in the front of the case, but not the way you would think. I am not entirely sure why they opted for this method, but there is a standard power cable running from the front, under the case, to a standard power plug at the rear. This is so that you can have all cables running to the rear of the case, even though the PSU isn’t physically there.

The problem is that when you mount the PSU (it’s mounted top down, instead of on its side as usual), the regular power inlet (the one you would normally plug a wall cable into) is used for the internal run, which ends in a 90 degree angled plug. It was *very* hard to fit, as you can see from the pictures. If your PSU has the plug near the edge, it might even be impossible to fit the cable.

Detail of case bottom. Note PSU placement and power cable. Also note the rubber feet to lift the case slightly and allow at least minimal airflow below the case

From a space utilization perspective, I see why they did this. But practical it is not. I seriously hope I didn’t break the cable, because the fit is so tight. If I did, it’ll probably blow a fuse the minute I turn it on, since the cable would then be in contact with the metal of the case, causing a short.

Removing the bottom handle

I’m not sure why they made the bottom handle rounded too. The top one I get: it’s ergonomic, and it looks good. But the bottom? I don’t want the case to be a rocking chair. I want it to sit still on the floor, shut up, and do what it is told.

Luckily, removing the bottom handles is an easy task: Remove four screws, pull it slightly and lift it out. The result isn’t pretty, but then, this is a home server build, not a beauty pageant. Someone asked Bitfenix if they’d consider different kinds of handles, or kits to cover the void left by removing a handle. The answer at the moment seems to be no, and I understand. As they say in the post, plastic is cheap, but making new molds isn’t.

I suppose if the visuals are a dealbreaker for you, either leave the handles in, or cover it with black gaffer tape or something. You can see the end result in the pictures.

For airflow and stability, I added rubber feet to the bottom of the case. They seem to work fine. Whether I need more of a gap between floor and case remains to be seen. I have bigger rubber feet, and I’ll swap them in if it seems necessary.

Lower case after handle is removed.
Full side view of case after lower handle removed

..and for my next trick

I am currently looking for a motherboard. I’m basically down to two choices: an Intel DH61ZE, or a cheaper desktop board with the same H61 chipset.

Price for the former is ~80€, price for the latter: 45-60€

I might not move the i5-2500 to this board after all. I’ve been looking at a used i3-2100, which has 2 cores and hyperthreading, making it nice for an ESXi box. They are priced at around 40-60 euros used.

Memory will come from an existing stash, but will be limited to 16GB by the motherboard. Just something I’ll have to live with unless I dish out more money for a modern board, or a proper server grade board and processor.

MicroATX Home Server Build – Part 1

Today I officially started my new home server build by ordering a case. The requirements for building a new home server are the following:

  • It needs to be physically small
  • It needs to be able to operate quietly
  • It needs to utilize some current hardware to reduce cost
  • It needs to be able to run VMware ESXi 6
  • Needs to support 32GB RAM for future requirements
  • Needs to accommodate or contain at least 2 Intel Gigabit NICs

Having run a number of machines at home over the past three decades, some of these have become more or less must-haves. Others are more of a nice-to-have. I’ve had some real server hardware running at home, but most of the hand-me-down stuff has been large, power-hungry and/or loud to the point where running it has been a less than pleasurable experience.

The last candidate was an HP Proliant 350 G5 (or so?), which was otherwise nice, but too loud.

You will note that power consumption isn’t a requirement. I don’t care, really. The monthly power bill for our 2.5-person, 100 m² household is in the neighborhood of a few dozen euros. I really don’t know, or care. I’m finally at a point where I can pick one expense that I don’t have to look at so closely. For me, that expense is power. Case closed.

The conditions I’ve set forth rule out using a classic desktop machine cum server thing. Those are usually not quiet, they use weird form factors for the motherboard, seldom support large amounts of RAM etc. etc. A proper modern server can be very quiet, and quite scalable as most readers will know. A new 3rd or 4th generation Xeon machine in the 2U or Tower form factor can be nigh silent when running at lower loads, and support hundreds of gigabytes of RAM. They are, however, outside my price range, and do not observe the “Needs to utilize some current hardware to reduce cost”-condition.

Astute readers will also pipe up with, “Hey, this will probably mean you won’t use ECC memory! That’s bad!”. And I’ll agree! However, ECC is not a top priority for me, as I am not running data or time sensitive applications on this machine. Data will reside elsewhere, and be backed up to yet another “elsewhere”, so even if there is a crash with loss of data (which is still unlikely, even *with* non-ECC memory), I’ll just roll back a day or so, not losing much of anything. A motherboard supporting ECC would be nice, but definitely not a requirement.

Ruling out classic desktop workstations and expensive server builds, I am left with two choices:

  1. Get a standard mATX case + motherboard
  2. Get a server grade mATX motherboard and some suitable case

The case would probably end up being the same either way, as the only criteria are that it is small and can accommodate quiet (meaning non-small) fans. The motherboard presents a bigger question, and is one that I have yet to solve.

I could either go with a Supermicro board, setting me back 200-400€, and get a nice server grade board, possibly with an integrated Intel NIC, out-of-band management etc., or I could go with a desktop motherboard that just happens to support 32GB of memory. There are such motherboards around for less than 100€ (for instance, Intel B85 chipset boards from many vendors).

Here’s the tricky part: I could utilize my current i5-2500 (socket LGA1155) and its associated memory in this build. This would mean that the motherboard would obviously need to support that socket. Note! LGA1155 is not the current Intel socket. We’re now at generation 6 (Skylake), which uses an altogether different socket (LGA1151) that is compatible with neither generations 2 and 3 (which used LGA1155) nor generations 4 and 5 (which used LGA1150).

Using my current processor would save some money. Granted, I’d have to upgrade the machine currently running that processor (meaning a motherboard, cpu and memory upgrade, probably to Haswell or Broadwell, i.e. Socket 1150), meaning the cost would be transferred there. But then again, I tend to run the most modern hardware on my main workstation, as it’s the one I use as my daily driver. The server has usually been re-purposed older hardware.

Case selection

I’ve basically decided on the form factor, which will be micro ATX (or mATX or µATX or whatever), so I can go ahead and buy a case. Out of the options, I picked something that is fairly spacious inside and somewhat pretty on the outside, and that doesn’t cost over 100€. The choice I ended up with was the Bitfenix Prodigy mATX Black.

Here’s the case, picture from Bitfenix (all rights belong to them etc.):


Some features include:

  • mATX or mITX form factor
  • 2 internal 3.5″ slots
  • Suitable for a standard PS2 ATX PSU (which I happen to have lying around)
  • Not garish or ugly by my standards

I ordered the case today from CDON, who had it for 78,95€ + shipping (which was 4,90€). Delivery will happen in the next few days.

The current working idea is to get an mATX motherboard which supports my i5-2500 and 32GB of DDR3 memory. I’ve been looking at some boards from Gigabyte, Asrock and MSI. MSI is pretty much out, just because I’ve had a lot of bad experience with their kit in the past. May be totally unjustified, but that’s the way it feels right now.

I still haven’t ruled out getting a Supermicro board, something like this one: but that would rule out using my current CPU and memory. I’d have to get a new CPU which, looking at the spec, would either be a Xeon E3 or a 2nd or 3rd generation i3 (as i5s and i7s are for some reason not supported). An i3 would probably do well, but I would take a substantial CPU performance hit going from a Xeon or i5 down to an i3. I’d lose at least two cores, which are nice to have in a virtualized environment such as this.

Getting the board would set me back about 250€, and the CPU, even used, would probably be around 100€. Compare this against an 80-100€ desktop motherboard, using my existing CPU and existing memory (maybe?). Then again, I’ll have to upgrade my main workstation if I steal the CPU from there. Oh well. More thinking is in order, methinks.


Last minute edit:

The hardware I have at my disposal is as follows:

  • Intel NICs in the PCI form factor
  • Some quad-NIC thing, non intel, PCIe
  • Corsair ATX power supply
  • Various fans
  • If I cannibalize my main rig:
    • i5-2500
    • 16GB DDR3 memory (4x4GB)

Windows 10 Experiences

Prep work

Every single blog probably has a post like this, but I figured it’d be good to recount my Windows 10 experiences. For posterity reasons, if nothing else.

I was involved in the Windows Insider program for quite some time (since the 9000-series builds), and have run Windows 10 pretty happily in a number of physical and virtual machines. Among them, VMware Workstation 11, Virtualbox 4, and a Thinkpad T420s. All without major issues, even when it was still in the preview stage.

Updating my own workstation is another issue entirely, but I figured I would do it anyway, and fix any issues that might come up as they hit.

I started off performing a standalone full backup using Veeam Endpoint to an external USB drive, and moving the Veeam recovery media to that same external disk. This is a good practice in case everything blows up in your face. Using Veeam Endpoint, I could perform a bare metal recovery in the event of a total disaster, and return to my pre-upgrade state.

The plan was as follows: update Windows 7 to Windows 10, then wipe and do a clean Windows 10 install. The reason behind this? During the upgrade phase, your Windows 7 (or, I suppose, 8/8.1) product key is converted to a Windows 10 key and paired with some kind of hardware ID identifying your computer. One could try to install Windows 10 directly using the common key that seems to be the same on all machines doing the 7/8/8.1 -> 10 upgrade (for the Pro version, it’s: VK7JG-NPHTM-C97JM-9MPGT-3V66T), but some have reported that such an install fails. This is probably because there is some backend magic during the upgrade which ties your computer to Windows 10.

So I started off getting the Windows 10 media using the Microsoft Media Creation Tool. I also saved the ISO to a USB drive, from which I could perform the full install later. Some people have reported that starting the upgrade from the install media is more successful than the “Windows Update” method. If you want to force your upgrade the Windows Update way, you can do the following:

  • Remove all files from the folder C:\Windows\SoftwareDistribution\Download
  • Remove the folder $Windows.~BT from the root of your system drive
  • Start an administrative command prompt and run wuauclt.exe /updatenow
  • Open and run Windows Update from the control panel
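As a sketch, those steps could be run from an elevated command prompt roughly like this (assuming C: is your system drive; these are the standard Windows Update cache locations):

```shell
:: Sketch only -- run from an administrative command prompt.
:: Assumes C: is the system drive.

:: Clear the Windows Update download cache
del /f /s /q C:\Windows\SoftwareDistribution\Download\*

:: Remove the staged upgrade folder, if present
rmdir /s /q C:\$Windows.~BT

:: Ask the Windows Update agent to check for updates immediately
wuauclt.exe /updatenow
```

Then open Windows Update from the control panel and let it offer the upgrade.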

The Upgrade

However, I opted for the install media method, which seemed to work fine. I mounted the ISO (using WinCDEmu, if you want to know), started setup.exe and followed the upgrade wizard. Everything proceeded basically without incident, except for a weird Razer Synapse install popup during the upgrade:

Kind of weird, and it also tells me that explorer.exe is running somewhere in the background there (I thought the upgrade happened in a “pre-Windows” environment, before any more advanced GUI elements start). I was unable to install Razer Synapse (a program I had installed in Windows 7, which was therefore coming over to the new Windows 10 world); it crashed with some error. I dismissed the window, and it didn’t appear to bother the upgrade in any way. Funny nonetheless!

After the upgrade, I had a basically working Windows 10 environment with all of my Windows 7 software etc. Nvidia drivers were installed as part of the upgrade, and they were of the correct version (one which supports Windows 10). Nvidia’s own little control panel did offer me an upgrade to the same version, but was unable to install it; somehow it didn’t detect that Windows had already installed it. I didn’t troubleshoot this further, as everything was working and I was going to do the clean install anyway. Razer Synapse also worked, but likewise didn’t detect that it was already installed and insistently popped up the same install wizard as in the picture above, failing with an error. It’s already installed! Give up! 🙂

N.B. Do not proceed unless Windows tells you it is activated. You can also check your upgraded Windows 10 key using a tool like Magical Jelly Bean Keyfinder (or some other method you prefer)
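One quick way to check the activation status is the built-in slmgr.vbs script (ships with Windows; run from an administrative command prompt):

```shell
:: Show the current activation status / expiry
cscript //nologo C:\Windows\System32\slmgr.vbs /xpr

:: Show license details, including a partial product key
cscript //nologo C:\Windows\System32\slmgr.vbs /dli
```

If /xpr reports the machine is permanently activated, you should be safe to proceed with the clean install.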

The Clean Install

I wanted a completely clean environment, as I’ve had bad experiences with Windows upgrades since the 3.1 -> Windows 95 upgrade. Just trust me.

I had a bootable USB drive with the Windows 10 x64 Pro installation media on it. I was prepared to re-install all applications, and I had a backup of everything just in case. Boot the machine, perform a clean install from the USB drive. I entered the product key starting with VK7JG during installation with no issues, and the install went without incident. Apparently it might not even ask you for a key, since the machine was activated after the upgrade.

After the install, I had one device with missing drivers (an Asus Xonar DG sound card); everything else worked out of the box. I installed a bunch of my favorite programs, and so far, a week or so after the upgrade, I still have not had any major issues.

Now, what I did do was disable all forms of tracking and “send information to Microsoft”-type settings. I’ll do another post on this. Basically, it seems to be really hard to get rid of everything tracking-related, because some of the call-home functions are hard-coded and IP-based, so a simple hosts-file block won’t work. You need to deal with it at the firewall level, but even then, some users are reporting funny issues with their computers when they can’t call home. Which is sad. But then again, the EULA probably states you don’t actually own Windows 10 or have any rights to it, and the upgrade is free, so whatever. Take my firstborn.


Among others..

Build 10240: Did you get assigned a license/product key? from Windows10


Some of the privacy related stuff:   <—- Note that this looks very shady, I would take it with a metric fuck-ton of salt

Bare Metal Recovery Experiences with Veeam Endpoint BETA

Note! This article describes a product that had not yet been released, so things may have changed between then and the release version! Some of the screenshots are from the release version of the Recovery Media

Note! Some of the images are from a different run than described, so ignore possible inconsistencies.

A prospective customer was having some issues when they were trying out Veeam Endpoint Free (while it was in beta), specifically bare metal recoveries. Not having tried it, I decided to give it a go to see where they might have gone astray. Here are some notes from the road.

Let’s start out with my environment:

  • Lenovo Thinkpad T440s running Windows 8.1, 256GB SSD drive
  • Veeam Endpoint Backup version (not the release version)
  • Backups are running to a server running Veeam 8 with the pre-release Patch 2 which allows Endpoint backups to a Veeam Backup Repository
  • Laptop and server are not on the same subnet/VLAN but traffic is allowed between the two
  • Target laptop is a Thinkpad R61 (just the first empty machine I saw without an owner in sight :)). The machine has an empty 320 GB spinny disk
  • Backup job is set to “Entire Computer”

Nothing exotic regarding the job, it takes everything on the machine except for deleted, temporary and page files, allowing for a complete restore of the computer to a given state.

To enable the bare metal recovery, create the Recovery media when prompted during install. Note that you can also skip this step and create it later, but I suggest doing it now. I chose to make it an ISO file, and then burned that onto a CD. I suppose you could use a USB drive as well, but I didn’t test it. The image in my case was about 480 megabytes in size, and was named VeeamRecoveryMedia_HOSTNAME.iso. When creating the recovery media, I left the default checkbox for hardware drivers checked, and did not add any additional drivers for this exercise.

After the backup was done, I booted up the Thinkpad R61 from the recovery CD. The process was fairly straightforward from then on. Noteworthy is that I didn’t even expect this to work, since I was restoring to a completely different generation and model series of Thinkpad with completely different hardware. Windows usually throws a hissy fit if you change the direction of the wind, or the moon is at an odd phase, but to my utter amazement, this actually worked. Not sure whether I should thank Veeam or Microsoft Windows 8.1 for this one 🙂

Starting off, this is the first thing you see when you boot from the recovery media:

First screen in the recovery media

We can either start using different tools (familiar to those who have used Windows PE type disks before), or start the Bare Metal Recovery process. Screenshots were taken from a restore I did in VirtualBox to avoid potato-quality pictures.

In the second screen, we have to choose where our backup files reside: Either a local storage medium (USB disk, other hard drive etc.) or a network storage location:

Choose where your backups are located

I chose network storage, since my backups are located on a Veeam BRS server. After this, we may have to give it some network settings in order to access the network. You can use either a wired or wireless connection. You can also specify drivers in case you have more exotic hardware that isn’t detected by the boot disk.

Network settings dialog

After this, we select whether we want to use a network share, or a BRS server:


Give the name or IP of the BRS server, and credentials. On the server side, you can set which credentials have access to which repositories, so make sure these are in order. On the next pages, you can choose the machine and restore point:

Veeam Server Credentials
Select the computer from the job
Select restore point

So at this point we have chosen what, and from when, we are going to restore. Now we continue by telling it how we want the disk layout in the backup to look on our target machine (which may have a different sized disk, for instance). Maybe we don’t want or need to restore every partition? I went with Manual restore (advanced) for more fine-grained control.

What to restore?
Choose the disks that we want to restore from our backup

In my example, I want a fully working replica of my original machine, hence I select all OS drives. In my case this means the System Reserved partition that later Windows versions create to store certain boot files, and the C: drive. Note the partition sizes. Also note the ‘Customize disk mapping’ link in the lower right-hand corner. There, we could configure a different layout than the original. The default is noted in the ‘Restore layout’ column as ‘Automatic’, which keeps the original layout if possible.

We can now see a summary of what we are about to do. We then start the process:

…the process has started

Despite the scary warning (which may or may not be related to this being a beta at the time of my test), the restore process completed. Note how it updates the BCD (Boot Configuration Data) so we can boot our newly restored system. It also does some magic with drivers, which might be why it booted on a completely different laptop (T440s vs. R61).

…and completed

We can now hit Finish, remove the boot media when instructed, and boot into our restored system. As I mentioned, everything worked, and was exactly as I could have hoped! I will do an update on this article when I’ve had a chance to try the release version (build available since April 14, 2015, see

The finished product!