ESXi 5.0 AMD Whitebox Server for $500 with Passthrough (IOMMU)

Gigabyte GA-970A-UD3 Motherboard w/IOMMU

One of the things I looked for in building my ESXi home lab was enough hardware, at a decently cheap price, to get practice with advanced features such as vMotion, Fault Tolerance, and networking. I also wanted a functional lab where I could do things like virtualize my NAS and HTPC for the house. In fact, I ended up virtualizing all but a single PC in the house (both my wife and my son use a virtual computer as their everyday desktop), but that’s for another post.

One of the nice things with ESXi 5.0 is that it added out-of-the-box support for many of the Realtek 81xx-series LAN chipsets that you find on consumer motherboards these days. That means that I can easily get 4 Gigabit NICs on my ESXi whitebox for about $7-$10 a NIC. Absolutely amazing.
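You can confirm from the ESXi shell that a Realtek NIC was claimed by a driver; the commands below are standard esxcli, but the vmnic name is just an example and will differ on your host:

```shell
# List every NIC ESXi detected, with its driver, MAC, and link state
esxcli network nic list

# Show details for one adapter (vmnic1 is an example name)
esxcli network nic get -n vmnic1
```

If a cheap NIC doesn’t show up here, it usually means no bundled driver claimed it, not that the card is dead.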

Consumer AMD Motherboards with IOMMU Support

Personally, I have tried and had success with two different AMD motherboards that have IOMMU and allow it in ESXi 5: the ASRock 970 Extreme3 and the Gigabyte GA-970A-UD3. Both are ATX motherboards, which is important for me because I want multiple PCI-Express slots for passing graphics cards through. The Gigabyte GA-970A-UD3 ended up being my choice in the long run simply because it has more slots: one PCI-e x16, one x4, three x1s, and two PCI slots. That allowed me a lot of flexibility.

In the end, I have three of these boxes for a total of 24 cores and 96GB of RAM. Two of them have dual graphics cards, which are passed through to VMs with physical keyboards and monitors; my wife and son use two of those VMs for everyday desktop use and gaming, while the other two serve as HTPCs in the house. The third box has an x4 PCI-e 8-port RAID card that is passed through to a VM serving as a NAS for the house (the motherboard SATA ports are also passed through, for a total of thirteen 2TB drives).

For the desktops, I also pass USB through to them, and use a USB-over-CAT6 extender to run the USB signal up to a 7-port powered USB hub, which runs the keyboard and mouse and gives them extra USB ports. Video comes by way of HDMI over CAT6. Sound comes from a USB sound card with 5.1 audio to speakers. My son games with titles like World of Warcraft and Battlefield 2 with no issues.

ESXi 5.0 AMD Whitebox Build with IOMMU

Please note that these are the prices as of February 2013; much of this can be picked up off eBay at considerable savings. I also list what I have in which slots, although there are obviously a couple of different options you could go with. The HD6670 and HD6450 both come with low-profile brackets. The HD6670 is a gaming card and handles just about everything you can throw at it; the HD6450 runs my HTPCs (XBMC) at 1080p flawlessly. These are also diskless nodes, booting off USB.

A final note here: I am running the second video card, the HD6450, in a PCI-e x16 slot that only runs at x4. However, plenty of benchmarks have shown that performance only takes about a 5% hit at x4, and that’s with higher-end cards. I wouldn’t be surprised if the 6450 isn’t feeling a hit at all.

  • Motherboard: Gigabyte GA-970A-UD3 — $85
  • CPU: AMD FX-8120 Zambezi 3.1GHz Socket AM3+ 125W Eight-Core — $120
  • RAM: 32GB (4x8GB) DDR3-1333 — $120
  • Video Card: HIS Radeon HD6670 PCI-e x16 Low-Profile — $90
  • Video Card: Sapphire Radeon HD6450 PCI-e x16 Low-Profile — $40
  • Case: ARK IPC-2U2055PS 2U Rackmount w/420W Power Supply — $100
  • GB NIC, PCI-e: Generic NIC off eBay using the Realtek 81xx chipset — $7
  • GB NICs, PCI: Generic NICs off eBay using the Realtek 81xx chipset — $7×2=$14

ESXi 5.0 Whitebox Slot Population

  • PCI-e x16: Radeon HD6670
  • PCI-e x4 : Radeon HD6450
  • PCI-e x1 : USB Card
  • PCI-e x1 : USB Card
  • PCI-e x1 : GB NIC
  • PCI : GB NIC
  • PCI : GB NIC

Total Cost per ESXi 5.0 AMD Whitebox: $536

A picture of the finished build on one of the boxes, sans the video cards.

ESXi 5.0 AMD Whitebox Server for $500 with IOMMU

$500 ESXi Whitebox with IOMMU

Any questions or comments? Leave them below!

  • Marcus Werme

    Interesting build! Most home labs for vSphere are built to just play around with vSphere HA, DRS, FT etc, yours is a bit different. What kind of USB and HDMI extender products are you using for this? Do you need to route multiple wires to the endpoints or one CAT6 cable is enough?

    • vintagedon

      You’re correct that most builds are just for HA, DRS, and so on, and I have all of this (I’m working on a post about the total lab now, as I’ve made a few corrections and even virtualized a house PC), but mine definitely had a different slant. As I had to research all this myself, I thought I’d try to make this available in one single spot.

      As for the USB extenders, a single CAT6 cable has proven to be enough for each extender. The USB over CAT6, I found I can terminate with a powered USB hub and it works fine. HDMI works great over CAT6 including audio. See my answer to cw823 below for the exact models I use.

  • cw823

    Just ordered some of the above, but I’m curious on adapters for both HDMI over cat6 and USB over cat6….

    • vintagedon

      Currently, I’m using the following models:
      Tripp Lite B202-150 USB over CAT5 Extender
      Sanoxy HDMI over CAT5 Extender

      Although there are other models, these worked for me and were simply plug and play. See my comment reply to Marcus above for more info. A single CAT6 run per adapter worked, and I was able to use a powered USB hub at the end of the USB for keyboard, mouse, HTPC remote, and so on. My maximum runs in the house (it could go farther, this is just the longest runs I need) are 75′. I get full 1080p video on the HDMI with audio.

  • cw823

    Awesome information. I showed up here wondering what AMD motherboard had working IOMMU for passthrough. Currently I have an openindiana NAS and a couple of XBMC workstations, and what you’ve done here with USB and HDMI over CAT6 is brilliant.

    Ten thousand thanks.

    • vintagedon

      Thanks much; there was a lot of information out there, little of it practical and even less with known results and specific builds, so I thought I’d save someone the same initial headaches and research I went through. Also, this afternoon I’ll be posting a second build that I’ve found works flawlessly with the ASRock 970 Extreme3 and a low-cost LSI SATA card, which has allowed me to pass 14TB of SATA drives (7x2TB) through to a VM for use as a virtualized NAS. That build is $434 sans the drives and case and is still 8 cores, 32GB of RAM, and 4 Gigabit NICs (add the 2U rackmount case w/420w power supply for $100 more).

  • cw823

    Currently I have an i7 build as an ESXI server hosting a couple AD servers and openindiana NAS. I’m out of PCI express slots, even the x1 variety so I was looking to build something else. Then I come across this and realize I can do SO much more with another ESX box and save money/power in the long run. Keep the info coming!

    • vintagedon

      Agree completely with your conclusion here. And although I do love i7 builds, the performance vs. price point of building an 8-core AMD node is just unbeatable. Will make build post #2 here in a bit. Also, I’m considering adding a small forum onto the site for discussions, as I’d love to see others’ builds also.

  • Beantown12

    Are you concerned about running a high TDP chip like the 8120 24×7? Have you checked your power consumption and electricity cost? I’m looking at putting together a home server right now and I keep going back and forth between the 8350 and the Xeon 1230v2 primarily based on cost/value of the 8350 vs. reduced power consumption of the Xeon. Trying to decide if the reduced power justifies the added cost…

    • No, I’m not concerned at all. Remember that the vast majority of the time, the cluster runs at idle. VMware is very good with DRS about keeping the CPU usage spread out over the cores, especially using resource pools. My ESXi nodes have only a motherboard, CPU, green drives/SSDs, GB LAN cards, PCI-e USB cards, and basic video cards for my HTPC VMs. 8120s idle at around 100w, and you’re only talking incidental draw after that … green drives are <10w, my D-Link 24-port switches in green mode draw ~9w, SSDs are almost negligible, and so on. So you can maybe say 125w per node. My whole setup, including my 24-port GB switches, a dozen green drives, my firewalls, and so on, is drawing <400w at idle according to my APC UPS. My electricity bill has gone up around $25 a month. That's perfectly acceptable to me.
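      A back-of-the-envelope check on those numbers (the $0.10/kWh rate below is an assumption; plug in your own):

      ```shell
      # Rough monthly electricity cost for a continuous draw.
      # ASSUMPTION: $0.10/kWh -- substitute your local rate.
      watts=400      # whole-lab idle draw reported by the UPS
      rate=0.10      # assumed $/kWh
      kwh_month=$(awk -v w="$watts" 'BEGIN { printf "%.0f", w * 24 * 30 / 1000 }')
      cost=$(awk -v k="$kwh_month" -v r="$rate" 'BEGIN { printf "%.2f", k * r }')
      echo "${kwh_month} kWh/month, ~\$${cost}/month"   # 288 kWh, ~$28.80 at the assumed rate
      ```

      At that assumed rate, a constant 400w works out to roughly $29 a month, which lines up with the ~$25 bump I’ve actually seen.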

      As for the ECC, I haven't had a single memory or disk error on any node or any VM in over a year, and I keep regular backups, both on-site and off. I see no reason to spend the extra cost myself. Remember that my goal here was to build the lowest cost-to-performance-ratio ESXi nodes that gave good performance at low energy usage, not to build an Enterprise-level solution. I work as a VM Administrator for an Enterprise-level hosting company, and do that there 😀

      • Beantown12

        Thanks Don. Did you just use the stock cooler for the 8120 or did you buy a new one? I think I’m leaning towards the 8350 or the 6300 now…

        • The stock cooler was enough, and I’ve been getting great heat offload out of them. I’ve been using an iStar 2U case (link below), and putting a couple of Vantec Tornado 80mm fans in them. This case also has a grill on the top cover where the power supply fits (it takes a normal ATX power supply) to allow it to pull air. You can either get low-profile cards, or I find that you can take a Dremel, easily bend the bracket at the right height, cut the excess, and drill a small hole for the screw to hold the bracket. Link to the case:

          As another note, I have the 8350 in my personal desktop, and it’s a great chip. Less power usage and a great performance boost over the 8120. Mine runs 19C at idle.

          • Beantown12

            Thanks Don. Very helpful. I actually just picked up a Fractal case on sale as the first piece of my build (link below). Reviews say it cools well and runs quiet. If you had to build the same ESXi box today, would you still go with either the Gigabyte or ASRock motherboards?


  • crnhusker

    How would this work for 5.1?

    • The hardware itself would work great; I’ve run this on both 5.0 and 5.1. However, please be aware that it’s been well documented that 5.1 broke a good bit of PCI passthrough, and although the hardware may work fine, actual passthrough may have a number of issues due to the 5.1 updates. There are a number of good threads on the VMware Community boards; here is one you can review.
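      If you do try 5.1, one quick sanity check from the ESXi shell is to dump the PCI inventory and confirm your devices still enumerate after the upgrade (device names and IDs will vary per host):

      ```shell
      # Enumerate every PCI device the host sees, with vendor/device IDs;
      # devices must appear here before they can be toggled for passthrough
      # under Configuration > Advanced Settings in the vSphere Client.
      esxcli hardware pci list | more
      ```

      A device that was passable in 5.0 but misbehaves in 5.1 will usually still show up in this list; the breakage tends to surface later, when the VM tries to claim it.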

  • Greg Mead

    Very good stuff – the whole site. Thanks so much for taking the time to share it all.

    I read your About Me page, and I think we might be twins from different parents. I got into this rat-race at the same age as you, and woke many a morning with VIC-20 keyboard imprints on my cheek or forehead. Learned programming by writing games for it, and a succession of other hardware. I’m also an avid maker, and my lifelong ADHD fueled journey has me currently chasing my personal VM empire.

    I have a couple of E3-1230 based boxes with 32GB, and some stray hosts on hardware ranging from DTPC hardware to a DL360 and DL380 that are a few generations old, but with wicked fast disk arrays and power consumption to match. I had a couple of old Cisco 3750Gs around, so they comprise my backbone, and I have a Synology 1812+ for iSCSI.

    I read your Home Lab Specs page with interest, but did have one question: How are you handling VMware licensing? Your Lab Host 1 obviously can’t run ESXi hypervisor free version, so I’m assuming you went the official license route? If so, I was interested in what you elected to go with. For more advanced features, I’ve been doing embedded ESXi hosts on eval licenses and blowing them away as the licenses expire. I would much rather have the physical hosts on real licenses that would let me play with vMotion, etc, but VMware is sadly lacking in a home lab license structure. Given that you are similarly out of control, I was wondering how you were addressing it? 🙂


    • Greg: Thanks for the compliments, and it does sound like we’re eerily similar. Been there, done that on the power drain, and I’m definitely trying to keep my builds power efficient nowadays.
      As for the licensing, I work for an enterprise level cloud computing host, and they were nice enough to give me a half a dozen vSphere Enterprise Plus and vCenter Server Standard licenses for personal use. If they had not done that, I’d probably be doing something close to what you’re doing, or, I’d be using OpenStack, which I’m very fond of.

  • Aaron Romine

    Don – Thanks so much for this. Newegg ran a special a few days back on the GA-970A-UD3 motherboard + FX-6300, and I jumped on it plus 32GB of RAM (about $200; I’m sure I’ll regret it if prices drop this fall). The build went flawlessly, and I successfully passed through a cheap PCI gigabit NIC I had (ESXi 5.1, latest update, seems to work fine)! This build seems pretty flawless, save for one detail: did you have any luck or an approach to getting CPU temperature or any other sensors into ESXi Health? I can see only basic info, no sensor data, in the console. I ended up using a spare heatsink from my old Phenom II X6 (a lot more copper; the 6300’s heatsink was pitiful) + silver thermal compound, and while the BIOS made me believe that heat-wise I was OK, I’d like to monitor under load for safety. Otherwise I might just do a USB-bootable BartPE environment for stress testing/BIOS updates/monitoring to verify the new build. Thanks!

    • Unfortunately, no, I haven’t gotten any of the sensor data to work, and that may be the only drawback of this build. For me, I don’t push this lab, even with my son’s gaming VM being on this rig, hard enough to need to monitor the sensors, so I’ve never noted it. Thanks for the reminder, and I’ll make sure and put a note on this somewhere in the article.
      Glad to see that the build worked out great for you. That’s where I get my joy from. Happy to see people getting some use out of my notes.

      • Aaron Romine

        Yeah I really appreciated these notes. I was considering one of the HP micro-servers since people had such good luck with those, but I was worried that performance would be pitiful. Your article helped push me to the AM3 platform. 32GB of RAM later I have a home lab that should be relevant for years. I may eventually migrate my EX485 WHS over to the platform. Thanks again!

  • CraziFuzzy

    I’ve got this board (v1.2) and was never able to get ESXi 5.1 to recognize the IOMMU capability; Gigabyte support was of no use either. Is there a limited selection of CPUs that support IOMMU? I was under the impression that all Phenoms support it, but I currently have a Phenom II X4 965 in it. The IOMMU setting is enabled in the BIOS. Did this board take any special tricks to get working with passthrough?

    • ESXi 5.1 has some verified IOMMU issues, and everyone I know doing hardware passthrough has stepped back to ESXi 5.0. It is my understanding also that all of the Phenom CPUs are IOMMU capable, so my assumption at this point is that ESXi 5.1 is your problem. As a test, I backed up one of the USB sticks that I boot ESXi off of, updated it from 5.0 Update 2 to 5.1, and it broke every bit of passthrough hardware I had.

  • Filipe

    What do you think about this build?

    CPU i5 4570
    MOBO GigaByte GA-Z87M-D3H
    Case. NOX NX-1 Evo
    PSU LC Power 550W 6550GP V2.2 120mm
    RAM 2 x Corsair Vengeance 8GB DDR3 1600 PC3-12800
    HDD 2 x Seagate 1TB Barracuda 7200rpm 64MB SATA 6.0Gb/s


    • This is a solid build, although I’m an AMD fan myself for virtualization (Intel in my desktops, though), but that’s personal preference. You’d get around the same performance, just with double the cores, if you built it out using the AMD 8350 (see for a comparison), but that may not be your goal, and you’ll have some caveats like higher energy use. The 2x8GB sticks are a great choice, as you don’t take the cheap way out with 4GB sticks, leaving yourself room for expansion to 32GB if you need it. Overall, though, I’d say this looks great if Intel is your choice.

  • Clint

    Great blog – been thinking about doing this for home lab for ages and just found your blog today!

    Question – If you’re using a mobo without integrated graphics, can you pass through the only PCI-e card you have installed, leaving no card for the host, or does the host need a card regardless (meaning installing a second cheapo that’ll never do anything)?

    • You can theoretically pass through the PCI-e video card after you’ve done the initial setup, although I use a cheap ATI Rage PCI video card just to have output for the local terminal in case I need it. The ESXi node will use the video card during the boot process until it gets to the point where it’s passed through, and if you have a monitor hooked up, it will appear as though the screen has frozen; the card has simply been handed over to the VM. So it’s your choice.

  • Matt Simpson

    I’m building something similar, thanks for sharing the hardware specs. So far I have a Cooler Master HAF 912 case and a 970A-UD3 mobo, and I’m probably going with an FX-8350. Question: what RAM are you using, and how much was it? I have a Micro Center nearby and was planning on picking up the Corsair Vengeance 2x8GB sticks and upgrading to 32GB later down the line. That RAM is about $150-160 for 2x8GB; just curious what you used. Thanks!

    • I’m using Crucial Ballistix as well as some Corsair Vengeance, and haven’t had issues with either one, performance- or error-wise. In fact, I use 32GB of Vengeance in my desktop and haven’t had a single bit of trouble with it, and I push my desktop pretty hard daily as a Cloud Admin who works from home (3 x 27″ screens, 30+ windows open, 60%+ RAM usage). Going with 8GB sticks, as you intend to, is the best option right now while RAM is so expensive.

      • Matt Simpson

        Awesome, thank you! And yeah I figured I would build it smart and go for the 2x8GB kit so I can expand to 32GB when needed.

        It is pretty interesting how you have yours setup. My wife is a huge world of warcraft nut, so I’m kind of interested in putting a good GPU in the host and pass it through to a Win7 VM to see how playable it is for her. How are your wife and son connecting to the VM’s? Do they use the vSphere client or rdp?

        I also thought about installing ESXi to a USB flash drive, and install Win 7 separately on a 128gb SSD or something. So if she ever wants to game, or if I ever do(not much of a gamer) we could unplug the flash drive that has ESXi on it and boot to the SSD/Win7 and use it as a physical PC. When she’s done gaming, unplug the SSD and put the flash drive back in, boot to ESXi and back in business.

        Consider me a regular here, I love reading your stuff.

        • Actually, I came up with a novel approach, and they don’t connect using either RDP or vSphere. Once I pass through the GPU, I run HDMI over CAT6 from the GPU (so HDMI –> CAT6 –> HDMI) and have a wall outlet with an HDMI keystone jack. The monitor plugs straight into that via an HDMI cable. Then I pass a USB card through to the VM, and run USB over CAT6 to the same wall plate with a USB keystone jack. A USB cable plugs into that and then into a USB hub; from there, the keyboard, mouse, and so on plug in. Voilà … instant *true* PC output straight from the video card and input to the USB card. Works like a charm. My son plays demanding games like CoD, Battlefield 3, etc., without a hitch.
          Right now, I’m working on two different articles that detail this, and a new 32TB SAN I just built. Just been busy at work. Lots more to come over the next month.

          • Matt Simpson

            Ahhh I see! That is very slick! I need to get this host built and start playing with things. Currently staying with my parents, so I don’t want to invest a lot of time and money into pulling cable throughout the house. When I get my own house though, oh boy.. the possibilities are endless as you have shown here.

            I might actually experiment with what you’ve done and setup a VM to see if the wife can game on it. Our PCs are pretty close to each other, so this should be doable. She does these huge raids in WOW that can be quite laggy at times. I think with 4 cores, 6-8gb of RAM set on the VM, and a decent GPU – she should have no issues with it. The integrated graphics on her motherboard is the bottleneck now, but instead of spending the money to upgrade her PSU/GPU, I might be better off beefing up the host and give it a shot. I just don’t want to get a huge GPU that could block any ports on the motherboard.

            If your card does well for you, I might just look at getting one of those.

          • The HIS Radeon HD 6670 1GB card would drive WoW just fine, and is a decently priced card that comes with a low-profile bracket to swap in, just in case you want to run it in a 2U case. The iStar 2U supports a full-size PSU and is a great space saver.

          • Jon

            I just found this blog and it’s a great read!
            As of now I have an ESXi server running pfSense, unRAID for storage, Server 2012 / Exchange 2013 server, Ubuntu server for SABnzbd and so on.
            I have never thought about using GPU passthrough for gaming, and I’m really interested in building a new rack server to do just that. I’m really looking forward to reading your articles. 🙂

  • Masuta

    Hi, Thanks for the great write up. I’m unable to source the UD3 in Australia, but I can get the 970A-D3 and 970A-D3P. Would either of these boards be suitable for an ESXi build with passthrough?

    • Masuta: I’m unable to find any reliable information on IOMMU for either of these boards, although I would strongly suspect with their chipsets, they would do fine. Are you able to source the ASRock 970 Extreme3 where you’re at? This is another verified board with IOMMU support.

      • Masuta

        Thanks for that Don. The ASRock board is available so I will just grab one of those ! 🙂

        • Brian

          Masuta, how did the D3P board work for you?

      • David

        I’m trying to order the ASRock 970 Extreme3 motherboard, but find that there are several updated versions.
        ASRock 970 Extreme3
        ASRock 970 Extreme3 with Bios updates
        ASRock 970 Extreme3 R2.0
        and there is an ASRock 970 Extreme4 now.
        Which would be best to get, and will the others work as well?

  • Michael

    How noisy is that box? Can it be used in a living room?

    • No noisier than my desktop. The 120mm fans are quiet, as a rule, and hard drives, as a rule, aren’t that noisy, even though there are a number of them in this box. As a side note, if I were going to do this over, I would purchase the Rosewill RSV-L4500 4U case over the Logitech (the case I used for my new SAN build). The extra $25 is well worth it in build and layout, and the 120mm fans in it are also quiet.

  • Sam Brown

    Search eBay for the L5639: deal of the century as far as servers go (DL180 G6) for storage with enough CPU to back it up. One CPU and 4 SSDs working hard with a 10GbE NIC is about 110 watts; two CPUs with 12 drives is about 240-300 watts with two 10GbE NICs. Dual 460w is plenty to power all of the above minus the Quadro. Probably step up to 750w if you ran a Quadro for vSVGA.

    • RadarG

      So you’re buying the servers off of eBay to part them out?

  • Jamille

    I don’t usually leave comments, but I ended up using the exact same motherboard and it rocks! I also intended to use it as a NAS, HTPC and Workstation + Router.

    I bought the GA-970A-UD3 + FX-8350 and 32GB of G.Skill Ripjaws 1866 (running at 1600).

    I split the passthrough devices as follows:

    2 SATA ports are used for a DVD drive and an SSD, which hosts the main VMs
    (DD-WRT / Xpenology / vCenter Server Appliance), set to autoboot.

    The other 4 SATA ports are passed through to an Xpenology VM with HDD SMART enabled, and working great!

    A PCI DVB-S2 card and a Radeon 5850 in the PCI-e x4 slot are passed through to the XBMCbuntu HTPC.

    The PCI-e x16 and PCI-e x1 slots host an Intel PRO/1000 PT Quad + an Atheros AR9000 (dual-band WiFi, 3 antennas) in DD-WRT as my main router; the onboard Realtek gigabit NIC is now connected to my cable modem.

    (I had to disable ACS checking in the vmkernel boot software options and use the x16 slot, as the x4 slot is shared under the southbridge and plugging the NIC in there would disable passthrough in both 5.0 and 5.1 build 914586.)

    The rest of the VMs run off the internal NAS’s NFS shares for my VCP/MCITP certs.

    However, I’m looking to replace the motherboard with something that has at least three PCI-e x16 slots not shared by the southbridge/PCI slots. If you have any recommendations, please let me know.
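    For anyone hitting the same shared-bridge problem, the ACS-check workaround Jamille mentions can be applied from the ESXi shell; this is a sketch, so verify the option name exists on your ESXi build before relying on it:

    ```shell
    # Disable the ACS capability check so devices behind a shared PCIe
    # bridge can still be marked for passthrough (host reboot required)
    esxcli system settings kernel set -s disableACSCheck -v TRUE

    # Confirm the setting took
    esxcli system settings kernel list -o disableACSCheck
    ```

    Note that turning off the ACS check removes an isolation safeguard, so only do this on a lab box where you accept the risk.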

    • WT

      Hi Jamille,

      Are you able to pass through a USB keyboard and mouse? I am using the same mobo, but so far I can’t get it working.

      Many thanks.


  • Jason

    How do you configure your Windows 7 VMs so that the VMware video adapter isn’t an issue after passing through a video card? When I disable the VMware SVGA driver and then reboot, the display won’t come back on the monitor unless I re-enable the SVGA adapter. Am I missing something?

  • Dave Unit

    Hi, I have just put this together and I have passed through the onboard SATA to a VM. But when I power it on, ESXi crashes. I was wondering if you could let me know how you have the SATA configured in the BIOS or if you came across this issue?

    • senses3

      You may have to enable AHCI for your onboard SATA controller in the BIOS.

      Onboard SATA is buggy when used for anything beyond a normal datastore. His box is set up diskless and the VMs are hosted on his SAN, so he does not have to configure any SATA devices. If you are planning on passing through any disks, I recommend looking into a PCIe SAS/SATA controller; I suggest anything with an LSI chipset, like the Dell PERC 6/i cards that you can flash with other LSI firmware.

      It will make your life much easier.

    • My apologies for the late reply; I seem to have missed this comment until senses3 replied. Are you using the exact same motherboard I listed in this post? Also, I agree with senses3 below, to a large extent, and all but one of my boxes are diskless. That one ESXi box, however, I have passed through all of the SATA controllers from the board, as well as passed through a cheap LSI RAID card set to passthrough on the card, so it’s simply a controller, and not a RAID card, and this has worked fine for quite a while. It functioned for a long time as my media storage until I moved to a dedicated solution for storage.

      However, I do have my BIOS set to AHCI, which ESXi tolerates much better. You can expect some speed hits, also, passing the controllers through, as ESXi has to negotiate this link.
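      As an aside, when ESXi balks at passing a particular onboard controller through, some folks override its reset behavior in /etc/vmware/passthru.map. A hedged sketch of the file format; the IDs below are placeholders, not real entries for this board:

      ```shell
      # /etc/vmware/passthru.map -- one entry per line:
      #   vendor-id  device-id  resetMethod  fptShareable
      # The IDs below are PLACEHOLDERS; look yours up with: lspci -n
      # ffff  ffff  d3d0  default
      ```

      A host reboot is needed after editing the file, and the right resetMethod (flr, d3d0, link, bridge, or default) is trial and error per device.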

      Can you provide some information on the motherboard you’re using, the version of ESXi, and what error you’re getting?

      • Dave Unit

        Thank you both for your comments. In the end I found out that ESXi 5.1 doesn’t like passthrough all that much; quite a few people suffered issues after upgrading. There is a patch to fix it, but I just reinstalled 5.0. I have an LSI card with two mirrored disks hosting the VMs, and I have passed through the 6 onboard SATA ports to a server for storage. It works quite well for what I need. I had an issue with the VM seeing a RAID configured on the onboard controller until I found the right chipset drivers. It’s brilliant for the price. The only downside I have found with the passthrough is that the PCI cards are dependent on a single bridge, meaning you can only pass all the cards through to one VM; you can’t split them. I think the PCIe slots may be on separate bridges, though, which makes them more flexible. I have the same Gigabyte mobo as in this post, btw. Thanks again, great post!

        • Dave: very true about 5.1; I have tried to avoid it like the plague if I was planning on doing any type of passthrough at all. The problems seem to be spotty, but why take the chance? Glad that the build worked out for you, and you found some use from my ramblings on the site. Good virtualizing to you!

  • Shane O’Neill

    Hey Don, been going over your blog entries on these two ESXi boxes, and am seriously considering doing virtualization at home, both for optimizing CPU usage to corresponding power draw, and just for the fact to learn something new that seems pretty damn cool.

    I have a million questions, and would love to have an offline conversation on your setup and what I am thinking of doing.

    A couple of pertinent “quick” questions on this setup:

    – Are you using the stock CPU cooler in this build? How are your temps looking under load?
    – You had stated that your family is using virtualized accounts to do their computing needs. Do you have thin clients set up in your house, or are they accessing the server on a full-blown PC using a VMware product? I would like to know more about this, as I would like to set up some low power terminals in my shop, garage, and hobby room to get to data/files/etc easily & at a low price point.
    – I see you use ATI cards. Any weird “issues” supporting them? I have read Linux can have some struggles with ATI cards.

    Like I said, a million questions. If you did an “ESXi primer for dummies” or “ESXi 101” it would be stellar!

    • Shane, sure, we could have an offline at any time. I’ll be adding a contact me form this week, as I’ve had a few requests for that, and you can send me some contact info through that. Will get it added probably tomorrow.

      Stock Cooler: Yes, I am. AMD CPUs have a great track record of low temps. The six-core CPU I use in the SAN I built barely gets above 10C-15C (I run CoreTemp, turn on logging, and parse that with a script that delivers the results to my Zabbix box to track temps). The eight-core boxes, even under load, rarely spike above 35C.
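      The parsing step is nothing fancy; a minimal sketch, assuming a comma-separated CoreTemp log line (the column layout and the Zabbix host/key names here are illustrative, not my exact script):

      ```shell
      # ASSUMPTION: a log line of "time,core0,core1,..." -- CoreTemp's real
      # column layout differs by version, so adjust the awk field index.
      line='12:00:01,34,35,33,36'
      core0=$(echo "$line" | awk -F, '{print $2}')
      echo "core0: ${core0}C"        # prints: core0: 34C
      # Hand the value to Zabbix (host/key names are illustrative):
      # zabbix_sender -z zabbix.lab -s esxi-node1 -k cpu.core0.temp -o "$core0"
      ```

      Cron the real version against the newest log line and Zabbix handles the graphing and alert thresholds from there.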

      Virtualized Accounts: Actually, I run HDMI and USB over CAT6, and simply terminate to a monitor and a powered USB hub that runs a keyboard, mouse, USB sound card, etc. This way, I don’t add the extra power draw of a thin client to my load. Read more about that under “‘Physicalizing’ VMs as Desktops” on my ESXi Lab Specs page.

      ATI: No weird issues at all. In fact, you can’t really pass through nVidia cards, and that works for me since I’m an ATI guy myself (yay for Eyefinity).

      ESXi Primer: Am working on a series of posts for that now that take you from installing ESXi to virtualizing your first machines.

      • Shane O’Neill

        Sounds great Don. Looking forward to our conversations. It’s one thing to have an understanding of the concept of virtualization, but implementation (actual “methodology”) seems pretty thin outside of the IT guru “inner circle”.

        Being a hobbyist looking from the outside in, you guys understand what you are doing, but a lot of the “how to” doesn’t seem to filter out to us “non-IT” hobbyists. And, quite frankly, VMware isn’t very good at explaining EXACTLY how their stuff works (more “overview”, less “soup to nuts”), and makes a LOT of assumptions that you know things that aren’t necessarily apparent.

        A VMware “Bible” would be a great read

  • Nikolas Weber

    Have you had any issues with ESXi 5.5?

    • None as of yet. Both of my builds have been tested on ESXi 5.5 without issue.

  • Michael

    Don, are there any consumer-grade motherboards (AMD/Intel) that have full sensor and health status support in ESXi? I am contemplating buying the Asus P9D-M motherboard, an E3-1270 v3 Xeon, and a 290X Radeon for my virtualisation/gaming fad. Do you foresee any issues with that setup?

  • Chris Beasley

    Wanted to mention that IOMMU support is broken in v1 GA-970A-UD3s. It appears that the northbridge IOAPIC is a bit buggered and causes trouble with the southbridge one. I’ve tried multiple hypervisors (Xen, XenServer, and ESXi) and they all report problems (although ESXi doesn’t report them outright, things crash after a few hours due to interrupt issues). Gigabyte have a beta BIOS (F8f) that disables the northbridge one, and that appears to solve the issue, especially with PCI-e tuner cards.
    The BIOS isn’t on the website and has to be requested, but again, this appears to affect v1 motherboards only!

  • RadarG

    But can your son play StarCraft 2 on his dual monitors? Does each host have an ATI video card in it? I take it you can’t vMotion their desktops. Still very cool.

    • Radar: Yes, he can play StarCraft 2 and any other modern games, and has full access to Eyefinity. Each host has at least two video cards in it, since the video cards have to be dedicated to particular VMs.
      And yes, you lose vMotion capability any time you passthrough hardware to a machine, since vMotioning it off the host would orphan any hardware. ESXi disables the vMotion capability.

      • RadarG

        tempting… I was wanting one box to rule them all. I however would have to run 2 copies of Starcraft 2 at the same time. I bet it can be done though.

    • RadarG

      I wonder if you could run two copies of the game at once? If I could build a rig that could run three copies at once, I think that would be a great uber-geek project for me.

  • Franko

    Don, brilliant stuff on your site. Just wondered have you submitted power readings in watts anywhere for your servers? As a reference a fairly modern HP desktop PC I was using as a server was drawing approximately 60 watts. I can find more details about that for comparison later if needed.

  • Nikolas Weber

    Have you had any problems with a Gigabyte motherboard + SAS card? Regards!

  • rdm1776

    Very interesting. I am very new to this kind of thing, but it’s exactly what I am looking for. The additional video cards are for the virtual PCs for your wife and son, but not absolutely necessary for a production server that runs, say, Asterisk, Windows 2000 Server, and possibly 2 or 3 other VMs, right? Or would you dedicate h/w for these too? Also, separately, I thought running a NAS as a VM was not a great idea (same box as personal PCs?); help me out on that one. Lastly, what are the configurations for your wife and son? If they are all USB, are they directly connected to the server? Thanks

  • Joe__Schmoe

    I’m hoping somebody is still reading this and might be able to help. I’ve got a similar setup to what you describe here, but am having trouble with audio on the VM. The VM (running Windows 7 with VMware Tools installed) doesn’t see an audio playback device. Since I want to use this to run XBMC, I really need audio. Do I need an additional sound card? If so, how do I set it up so the audio goes over the HDMI along with the video? I’m going to be connecting this to a TV via HDMI, so I would prefer to have a single cable for the audio and video.

    Any help or suggestions would be greatly appreciated.


  • vSnowDay

    Hoping someone can chime in here. I’m looking to pick up the P version of this board, “GA-970A-UD3P” – I can’t think of why it wouldn’t work, but can anyone confirm? I’m after the elusive all-in-one ESXi host, GPU passthrough for desktop hyper-convergence config. I have a Phenom II x4 now, plan to start out with that and jump to a hexa core later. Thanks!

    • This board should work without issue. Unless you’re hooked on Gigabyte, I will say that I’ve become convinced that ASRock rules the roost for home virtualization. Their consumer motherboards will also run Xeon chips, and every one I have tried, even if it doesn’t state it does, supports ECC RAM. Plus, they all work with hardware passthrough. Anyway … the P version should work fine.

      • vSnowDay

        Thanks Don. Are you saying your build #2 mobo “970 EXTREME3” supports ECC? What about the CPU in that instance? I’m wondering if my planned FX-6300 would work with ECC memory. I have some spare HP 2x8GB low-voltage PC3L-10600R that I’d love to be able to use. I don’t see my workload being intensive, but I may opt for the 95W version of the FX-8300, which looks to be about $50 more.

      • rdatoc

        I was looking for a replacement MB for my ancient AMD AM2 NAS server and came across your article. I usually go for ASUS since they explicitly advertise ECC support for almost all of their AMD MBs (sadly, AMD left it out of FM* CPUs). By coincidence, MicroCenter had the ASRock 970 Extreme3 for $59.99 after rebate, so I grabbed one. An Athlon II X4 605e (45W) CPU and 2x8GB Kingston ECC DIMMs round out my build. After plugging in the CPU and memory and firing up the UEFI screen, I have searched in vain for the ECC options. Booting up with Memtest86+ shows ECC as disabled. I’m presuming that the BIOS and/or the MB revision may have something to do with ECC settings being displayed or not. Or I just might be having one of those senior moments. In any case, can you tell me what MB revision and BIOS version you have for your Extreme3? And maybe a screenshot of the ECC options in the UEFI (which would probably be asking a lot, since that would require a reboot).
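As a cross-check on a Memtest result: on a Linux live USB with the right EDAC driver loaded (amd64_edac for these AMD boards), the kernel only registers a memory controller under the standard EDAC sysfs path when ECC is actually active. A small sketch of that check (Linux-only; driver availability depends on the kernel build):

```python
import os

EDAC_ROOT = "/sys/devices/system/edac/mc"  # standard Linux EDAC sysfs path

def ecc_active(edac_root=EDAC_ROOT):
    """Return True if the kernel registered at least one EDAC memory
    controller (mc0, mc1, ...) -- a good sign ECC is actually enabled."""
    try:
        return any(d.startswith("mc") for d in os.listdir(edac_root))
    except FileNotFoundError:
        # Path absent: no EDAC driver loaded, or ECC not supported/enabled.
        return False

if __name__ == "__main__":
    print("ECC appears", "active" if ecc_active() else "inactive or unsupported")
```

If this reports inactive even with ECC DIMMs installed, the BIOS is the usual suspect, which matches the board-revision question above.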

      • I agree, ASRock boards support passthrough very reliably.
        Also, there’s one Asus board I found that works.

  • Jebediah Cole

    Hey Don,
    Fantastic builds, very innovative and great documentation – your site has been a lifesaver as well as an inspiration for me getting started with home virtualisation.

    I was wondering what USB PCI-e cards you are using for passthrough? I am reluctant to buy USB 3.0 cards as I’m worried about ESXi compatibility.

  • Shaibu Ali

    Hi Don, I really like the info on your builds. I’m a bit lost on how your wife and son connect to their VMs… remotely over the LAN, or directly on the ESXi host?

  • Mike McAteer

    Hi Don. I’ve just recently found your blog, and it has been such a big help in getting some things straightened out in my own setups. I was curious, for the HDMI and USB over Cat 6 extenders, are those running through a switch, or are the cables run directly from one room to another?