ESXi 5.0 AMD Whitebox Server for $500 with Passthrough (IOMMU), Build #2

ASRock 970 Extreme3: ESXi Whitebox w/IOMMU

As I discussed in my first post, ESXi 5.0 AMD Whitebox Server for $500 with Passthrough (IOMMU), my ESXi home lab is a bit different from most in that I’m not only looking for a lab, but also a production environment to run virtualized HTPCs (XBMC in a VM) and even virtualized PCs for the house. All hardware had to be consumer-grade, but it also had to support ESXi HA (High Availability), ESXi DRS (Distributed Resource Scheduler), ESXi FT (Fault Tolerance), and the other advanced ESXi features for my home VM lab.

In my first post, I gave one of my builds for a node; however, I’ve also been working with another motherboard, the ASRock 970 Extreme3, that I’ve come to like as well. It loses a PCI-e x1 slot compared to the Gigabyte GA-990FXA-UD3, but I’ve had good luck grabbing it off eBay at a cheaper price. It also gives my ESXi home lab IOMMU (AMD’s version of passthrough), and the total build comes in a little cheaper. I’ve also found a set of hardware, including cards, that lets me replicate this build, down to the passthrough, every time, without any additional configuration or headaches. I imagine there are other builds that also work, but this one is reproducible for me with no extra issues. Plug and play.

This ESXi whitebox with IOMMU works out to around $534 (or $459 without my 2U rackmount case) currently if you headhunt parts on eBay. The ASRock 970 Extreme3 is easy to find for ~$75, and it supports the FX-6100/FX-6200 and the FX-8120/FX-8150 eight-core processors (Zambezi), as well as the FX-6300 six-core and FX-8320/FX-8350 eight-core processors (Vishera), without a BIOS flash. It’s been plug and go with all three of the motherboards I have purchased. That wins points with me.

ESXi SATA Passthrough on the ASRock 970 Extreme3

One nice quirk I have found with the ASRock 970 Extreme3 is that when you pass through the SATA controller, only 4 of the 5 ports on the board pass through. The 5th SATA port remains available to the ESXi host for use as a local datastore. For me, this is a boon rather than a drawback, since VMs with passthrough hardware can’t be vMotioned anyway. So why not store them locally and have ESXi boot off the hard drive? I also have an iSCSI target available for the cluster.

ESXi Video Card Passthrough on the ASRock 970 Extreme3

Video card passthrough in ESXi can be tricky, but there are some best practices I’ve come up with for anyone looking to do it. First of all, your RAM *must* be reserved for your VM if you’re passing through a video card. This is a non-negotiable point. ESXi moves RAM around and re-assigns it at times, and the video card will die the first time that happens. Reserve your RAM and you’ll have no stability issues. See my post about ESXi Video Card Passthrough (coming soon) for more information and configuration thoughts.
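One way to set the reservation is in the vSphere Client (Edit Settings >> Resources >> Memory >> Reservation); the equivalent .vmx entry looks roughly like the sketch below. The 4096 figure assumes a hypothetical 4GB VM, not a recommendation:

```ini
# Reserve all guest memory so ESXi never reclaims it from this VM.
# 4096 (MB) assumes a 4GB VM -- match it to your VM's configured RAM.
sched.mem.min = "4096"
```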

ESXi AMD Whitebox with IOMMU Parts List

Motherboard: ASRock 970 Extreme3 — $75
CPU: AMD FX-8120 Zambezi 3.1GHz Socket AM3+ 125W Eight-Core — $120
RAM: 32GB (4x8GB) DDR3-1333 — $120
Video Card: HIS Radeon HD6670 PCI-e x16 Low-Profile — $90
ESXi Host Video Card: ATI Rage XL Pro 8MB PCI — $8
Case: iStar D Value D-213-MATX 2U Rackmount — $75
Power Supply: Logisys PS550E12BK 550W Power Supply — $25
NICs: 2x PCI-e Gigabit, 1x PCI Gigabit — $21 ($7 each)

Optional SATA Controller Card: LSI SAS3041E 4-Port SAS/SATA PCI-e x4 — $25

ESXi Whitebox Total Cost without Case: $459
ESXi Whitebox Total Cost w/Case: $534 ($559 w/SATA Card)
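For anyone checking the math, the totals are just the parts list summed; here’s a quick tally using the prices quoted above:

```python
# Parts-list prices from the article (eBay prices, USD).
parts = {
    "ASRock 970 Extreme3": 75,
    "AMD FX-8120 Zambezi": 120,
    "32GB (4x8GB) DDR3-1333": 120,
    "HIS Radeon HD6670": 90,
    "ATI Rage XL Pro (host video)": 8,
    "Logisys 550W PSU": 25,
    "3x gigabit NICs": 21,
}
case = 75        # iStar D-213-MATX 2U rackmount
sata_card = 25   # optional LSI SAS3041E

without_case = sum(parts.values())
with_case = without_case + case
print(without_case, with_case, with_case + sata_card)  # 459 534 559
```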

Slot Setup for the ESXi AMD Whitebox

PCI-e x16: Radeon HD6670 (Passthrough to VM)
PCI-e x4 : LSI SAS3041E 4-Port SAS/SATA PCI-e x4 (Passthrough to VM)
PCI-e x1 : 5-Port PCI-e USB Card (for Passthrough)
PCI-e x1 : GB NIC (RealTek 8168, used by ESXi host)
PCI : Intel Pro/1000 MT Dual Gigabit PCI-X NIC
PCI : ATI Rage XL Pro 8MB PCI Video Card (Console Video)

Notes: All of this was off eBay, including shipping. Although some deals I had to wait on, I replicated this down to the dollar on two ESXi hosts, and the parts are plentiful. The case, of course, is totally up to you, and this could be done cheaper with a mid-tower box. I’m rack mounting, so I was looking for a solid case.

April 2013 Note: As of the time of this note, RAM has almost doubled in price since I wrote this article, and most 32GB (4x8GB) kits are running $175-$225. It may be smarter at this point to purchase two 8GB sticks and expand later when RAM prices come back down.

Also, note that the card in the PCI-e x4 slot is a SATA controller, and it’s completely optional. This particular node is sitting in a 4U Logisys CS4802 case with 8 SATA drives, 7 of them passed through to a VM (remember that one of the SATA ports on the ASRock 970 Extreme3 stays with the ESXi host). That VM serves as a domain controller and NAS for the house, using FlexRAID on the SATA drives to present an 11TB single-volume RAID that stores the house’s media, documents, photos, and roaming profiles. This setup is also working perfectly and was plug-and-play. I’ve also used a video card in this slot and was able to pass it through to a VM without issue.

To get the extra hard drives in without issue, I used a Cooler Master STB-3T4-E3-GP 4 in 3 5.25″ hard drive bay that allowed me to take the three 5.25″ external bays and turn them into 4 bays for 3.5″ hard drives. Worked perfectly, has a 120mm fan built in to keep them cool, and no issues to date with it.

Finally, I want four physical NICs on every node to properly segregate traffic for my VM lab. I *could* use a dual GB NIC card, but those take up a PCI-e x4 slot at minimum, and I’d lose my SATA controller. On my second node of this type, I didn’t need that SATA card, so that slot is free for another video card, or possibly a dual GB NIC card, freeing up one of the PCI-e x1 slots for a USB card to pass through. You can also use an Intel Pro/1000 PCI-X dual GB NIC in a regular PCI slot; these can be found on eBay for <$10. See my article Intel Pro/1000 Dual Gigabit NIC PCI-X Card in PCI Slot for more information and pictures.

The ATI Rage card stays assigned to the ESXi host as its video. Once the node boots, you lose console video output if you don’t have a card devoted specifically to the host, so I keep this cheap $8 card for ESXi host video.

ESXi AMD Whitebox Screenshots

Here are some various screenshots from this build just to give you an inside view of stuff going on. Some of these are of FlexRAID, so you can see the passthrough drives mounted in a RAID and presented as a single drive. This really helps with the XBMC shares that I present out to my HTPCs.

ESXi NAS VM: FlexRAID Volume

ESXi NAS VM, Disk Manager View

ESXi NAS VM, Device Manager

ESXi NAS VM, SBS 2011 Dashboard

ESXi NAS VM, Host Passthrough

ESXi NAS VM, Host Passthrough Popup

ESXi NAS VM, ESXi Host Overview

ESXi AMD Whitebox Server, Hardware View

ESXi AMD Whitebox Server, Hardware View, Labeled

ESXi AMD Whitebox Server, Hardware View, From Rear

ESXi AMD Whitebox Server, Motherboard Close-up

ESXi AMD Whitebox Server, Motherboard Close-up #2


  • cw823

    How exactly does the ESX HA work between your hosts now?

  • I still have two physical machines in the house that stay on 24×7. My own personal desktop (i7, 32GB of RAM), and a bedroom PC that’s a small, mini-bookcase system. I’m running Starwind iSCSI on both of these with a 128GB SSD on each. The free edition of Starwind will let you do a 128GB (max) HA iSCSI target on two nodes. If you’ve got a single node, it will let you do an unlimited iSCSI target. Great software, easy to use, and a great company. Will be providing an article on setting that up soon.

    Also, Starwind allows you to pick a specific NIC on the host you run it on, so I installed an extra GB NIC in each of these and dedicate that NIC to nothing but iSCSI traffic, which goes into a smart switch and is segregated to a VLAN for storage traffic. It uses very little CPU, and I rarely know it’s there. Normal iSCSI and NFS traffic is on a Synology DS211J.

  • wavejumper

    Are you using pciHole.start and pciHole.end in your .vmx file? That can help you get past 2GB (at least for a Windows 7 VM with GPU passthrough – not sure about Small Business Server 2011, though). The values depend on how much RAM your video card has. I have a couple of boxes working with Win7 as a VM with video cards passed through. One has a 1GB video card and requires start/end values of 1200/2200. The other has a 512MB video card and requires start/end values of 1200/1700. Hope it helps.

    • Sorry for the late reply, but no, I had not tried that until this weekend, and with those settings, I pushed past the 2GB pretty easily. My next project is to pass through my Radeon HD7870 and virtualize my gaming desktop.
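For reference, the settings wavejumper describes are plain lines in the VM’s .vmx file; a sketch for the 1GB-card case from his comment:

```ini
# pciHole values as reported in wavejumper's comment for a 1GB card;
# his 512MB card needed 1200/1700 instead. Values are card-dependent.
pciHole.start = "1200"
pciHole.end = "2200"
```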

      • Panagiotis Stamatopoulos

        A question here, guys. Why can’t I get more than 1.17GB of memory on my W7 32-bit VM? I’ve already tried several installations (setting the pciHole start/end values) but always the same outcome: (1.17GB) usable. I haven’t tried 64-bit though …

  • Panagiotis Stamatopoulos

    Great info Vintageon! The strange thing here is how you pass through only 4 ports of the on-board SATA controller, leaving the fifth one to host your ESXi installation. There must be something in your BIOS settings to allow this (perhaps one port configured as eSATA?). Which SATA port is your ESXi boot drive connected to (1 or 5)? Were you able to test this on the 970A-UD3 as well?

    • You’re exactly right … the 5th port, I found out, can be an eSATA port, and has a switch in BIOS, so it’s obviously controlled separately. Although I found that out completely by accident, it’s an awesome “feature”. I have not had a chance to test this on the 970A-UD3 yet, but I’ll try to do some testing this weekend. I’m bringing a physical box online for vCenter (I currently run it as a VM, but dislike that) and as an iSCSI node, so I’ll have an article on that soon. I’ll test the 970 then, update the article, and let you know via a comment here.

      • Panagiotis Stamatopoulos

        Couldn’t wait and tested it on the 970A-UD3! It is supported as well by setting the following in the BIOS:
        * OnChip SATA Type (AMD SB950 South Bridge, SATA3 0~SATA3 3 connectors) -> AHCI
        * OnChip SATA Port4/5 Type (AMD SB950 South Bridge, SATA3 4/SATA3 5 connectors) -> IDE

        That way, you can keep SATA ports 4/5 for use with ESXi and pass through the remaining SATA ports 0-3 (AHCI mode) to a VM. It looks like the Gigabyte 970A-UD3 has an advantage here over the ASRock 970: apart from the extra PCI-e x1 slot, you get an extra SATA port for ESXi (for use with a datastore or CD/DVD drive).
        Your blog gave me the idea to build my own VMware server and replace my 3 separate PCs (HTPC/W7/Unraid). Keep up the good work!

        • Exceptional work! If you don’t mind, I’ll add this to the article on the 970, with attribution, of course. I’ve always believed the 970A-UD3 had an advantage due to the extra PCI-e slot, but this definitely shoots it above the ASRock. The primary reason I’ve liked the ASRock is that I’ve been able to find it at ~$70 prices on eBay, and since I had 3 nodes, the extra PCI-e slot wasn’t needed. However, the 970A-UD3 makes a strong showing here.

          • Panagiotis Stamatopoulos

            Keep in mind that the 970A-UD3 is flashed with a beta BIOS. A key point here is the revision of the board. On Gigabyte’s site there is an F8a beta BIOS for board revisions 1.0/1.1/1.2, whereas for the rev 3.0 board there is no beta BIOS – only the official FC version (which I am not sure supports IOMMU properly). In general, I prefer ASRock because it has far better support for IOMMU. Most (if not all) ASRock AMD 9-series boards probably support IOMMU (passthrough), and they do it well. In addition, I checked ASRock’s site and saw that it has started supporting IOMMU on the FM2 socket as well (beta BIOS, of course). That shows ASRock is way ahead of others in IOMMU support on their affordable, low-cost boards. And if this works [properly] on FM2, there is an advantage to having the on-board graphics there – but of course you’re limited to a max of a 4-core CPU.

            P.S. I don’t mind at all if you add this info to your blog …

  • john

    What version of ESXi do you use for IOMMU? Can you do IOMMU in the free version?

    • Currently, I use 5.0. It’s been well-documented that PCI passthrough is broken in 5.1, so I’d recommend staying away from it and sticking with 5.0. The free version will do PCI passthrough, yes. You simply won’t have access to some of the more Enterprise-level features like DRS, Fault Tolerance, and so on, but if you’re just looking for a single-node box to run PCI passthrough on, then you can stick with the free version.

  • David Thompson

    My last attempt at posting had a glitch, so I don’t know if I’m posting a second time. I wanted to find out if you are still running ESXi 5, and will your equipment work with 5.1?
    I’m currently a network engineer and worked for five years as a system admin. With how I see things going, I want to dive into the virtual world and build a pair of whitebox ESXi 5.1 servers.
    I’ve been researching equipment and wanted to stay with the Intel i7, but your article showed me how much money I can save by going with the equipment you’ve selected.
    The local electronics store here in San Diego doesn’t have the ASRock 970 Extreme3 motherboard, so I’ll have to order it online. I appreciate all of the detail you’ve put in your posts.

    • You were probably writing this while I was replying to John’s question, very similar to yours, below. It’s been fairly well documented that 5.1 broke a lot of stuff with PCI Passthrough. All of my builds are fully compatible with 5.1, but if you’re looking to do PCI Passthrough, then stick with 5.0. That said, if that’s not your focus (my builds are really specialized to what I’m doing), then 5.1 is a go.

      And thanks for the compliments; I’m a systems administrator myself, and when I set out to do this, there was very little solid information about 100% working builds with desktop parts. I figured this might save someone the many hours I spent working these out.

      As for the ASRock 970, don’t forget eBay. I can usually find them for $70-$90 easily there. And I agree on Intel vs. AMD for this. I am not a fanboy of either, but when I can build a COMPLETE 8-core, 32GB node for just $70 more than an i7 costs alone … yeah, that’s my choice.

      • David Thompson

        In your response to John below you said “The free version will do PCI Passthrough, yes. You simply won’t have access to some of the more Enterprise level features like DRS, Fault Tolerance”.
        Eventually I want to train, get certified, and start working in a position using vSphere 5/5.1. How do people learn these other functions if they aren’t available in the free version?

        • They will normally use the trial version. You get 60 days of full functionality (no difference from the full Enterprise version), and that includes DRS, Fault Tolerance, and so on. Two to three runs of the trial version is usually enough to train for certification. There is nothing to keep you from reinstalling the trial version after your 60 days runs out.

          • David Thompson

            Great news! I was reading several posts where people said the free version had some limitations, and what I mentioned above came straight from one of those posts.
            With the low cost of AMD and the availability of the 8-core processor, your configuration is the best all-around hardware bundle for my needs.
            What are you using, if anything, for DRS, Fault Tolerance, and shared storage?

            I appreciate the effort you’ve put into your website, as I’m sure most do. When I got into IT, I had two six-foot racks full of server and Cisco equipment. When I ran it all at once, I would sometimes brown out the power in the house. I even ran a remote power strip so I could turn my lab on at home and access it from work for testing.

            I’ve got your website as a favorites and will read the rest of the articles you have available.

          • Thanks for all the compliments, Dave. As for me, my work was kind enough to donate three Enterprise level keys for use in my lab, so I have full DRS, Fault Tolerance, and so on. If you don’t have those, unfortunately, you’ll lose those once your free trial is out. A lot of people will back up their configuration, and re-install ESXi every 60 days to stay in the trial version. I’m sure somewhere that’s against the EULA, but I’m pretty certain we all violate a ton of EULA stuff every day 😀

            And yeah, been there done that on the big racks and brown outs. Technology has come a LONG way when 24 port GB switches only pull 9w.

  • daninfamous

    I’m pricing an ESXi whitebox. I haven’t purchased memory in a while – has it really doubled in price, or are you finding steals on eBay for 32GB around ~$120?

    Also, I don’t plan on virtualizing a lot, so I’m going with an FX-6100 to save some $$ – see any issues with that?

    • Memory has definitely jumped up in price, and I haven’t found any $120-$150 32GB sets in about six months. The cheapest I see them running for on eBay is ~$170. A link to a set for that price is here: The key to eBay is patience (and I’m quite patient); I use saved searches to check each day for newly listed buy-it-nows, and that’s where I usually find my good deals, or I use an eBay sniper service like (free).

      An FX-6100 would be a great chip. You get six cores at a very decent price point. However, consider this: you can pick one up new for around $115 at Amazon @ Amazon’s listing has the 8120 used for ~$132 @ I am not at ALL against used chips, especially through Amazon, whose A-to-Z 30-day guarantee applies even on used items bought through their website from third parties. Of course, the FX-6100 listing has a used one for $107 direct from Amazon Warehouse Deals.

      All of my processors came from Amazon’s used listings, as well as my 2-bay IOMega NAS that I use for one of my iSCSI nodes for $63 … can’t beat it.

  • Ken

    Quick question. Does your LSI SAS3041E controller support >2TB drives?

    • No, it does not. According to LSI support, LSI 3Gb/s HBAs only support drives up to 2TB. That includes the following model numbers: LSI SAS 3801E, LSI SAS 3801X, LSI SAS 3442E-R, LSI SAS 3442X-R, LSI SAS 3081E-R, LSI SAS 3041E-R, LSI SAS 3080X-R, LSI SAS 3041X-R.

  • Karl

    Hello mate, awesome set up.

    I wonder if you could advise me on something however as I see you are running a 550W PSU with a fair bit of hardware.

    My set up –

    AS ROCK Pro4 Motherboard
    Intel Core i5 2500 Processor
    24GB XMS Corsair Memory
    3ware 9650se 8 Port RAID controller powering 4 x Laptop HDD’s in RAID 10 + BBU
    4 X 1TB SATA2 HDD’s for Storage in Raw Device Mapping configuration
    2 X Additional GB NICS

    Corsair 500W Builder Series CX 80+ Bronze ATX Power Supply

    I want to add my Radeon HD 4800 graphics card to the system for passthrough, but I’m worried that with all this hardware, a 500W PSU won’t be enough. I won’t be doing any intensive gaming, but I will try PCoIP with VMware View with a game or two, just to see how it runs. Do you think I’ll be OK with my 500W PSU, or should I buy a new one?

    Thanks for reading.


    • Contrary to what logic would dictate, drives actually don’t draw that much power *once they are running*. It’s the startup that kills you. With the 9650SE, you can set the drives to staggered spin-up so that they don’t all draw power at once. The 4800 series draws around 110W at maximum, but even so, by my calculations you’re still coming in around 400W. I think if you properly stagger your hard drive spin-ups, you’ll be fine. If you want a bit of headroom, I’d say go no higher than 650W. A good site for calculations like this is
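As a rough sanity check of that estimate, here’s a ballpark tally of Karl’s parts. Every per-component wattage below is an assumed round number for illustration, not a measured or vendor-quoted figure:

```python
# Ballpark running-draw estimate for the build in the question above.
# All per-component wattages are illustrative assumptions.
components = {
    "Core i5 2500 (TDP)": 95,
    "motherboard + 24GB RAM": 60,
    "3ware 9650SE RAID card + BBU": 20,
    "4x 2.5in laptop HDDs, running": 8,
    "4x 3.5in SATA HDDs, running": 32,
    "2x gigabit NICs": 10,
    "Radeon HD 4800 at max load": 110,
    "fans and misc": 20,
}
total_watts = sum(components.values())
print(f"Estimated running draw: ~{total_watts}W")
```

With these assumptions, the running draw lands in the same ballpark as the reply’s estimate, comfortably inside a 500W supply; the spin-up surge, not steady-state draw, is the real constraint.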

  • rdb

    • Great setup and write-up! I was inspired by your article and built my 2nd ESXi server: same mobo (970 Extreme3), same video card (HIS HD6670, 2GB model instead of 1GB), a single NIC, no LSI SAS card, and a RIVA 128 4MB (Diamond Viper 330) for the host. The only problem I’ve run into is video passthrough for the Radeon card. I read your post about ESXi Video Card Passthrough and saw that you may have additional info, since it mentioned more information and configuration thoughts (coming soon). Please let me know if you have some info/tips! Thanks!!!

    • marcusmarcus2

      I too am looking forward to this “coming soon” post.

      I have started to consolidate all my systems down to 1 system to save on power. I have a desktop and an XBMC system that I want to “physicalize” (as you might say). My current physical XBMC system already has HDMI and USB over CAT6 run, so the system can stay in the cooler basement and the cabling is already done. I have a couple of older ATI PCI-e cards that I have been testing with passthrough, and I’m getting blue screens on my Windows VM. I want to make sure I can get video passthrough working before I buy a new video card. I noticed in your “ESXi 5 Home Lab Specs” post that you are running ESXi 5.0, and I have seen a few posts about trouble with passthrough in 5.1. I am going to try downgrading to ESXi 5.0 and see if that clears up my trouble. Have you tried to physicalize any systems in 5.1 or 5.5?
      Really looking forward to your post on video card passthrough. Maybe you could do a step-by-step walkthrough on how to physicalize a VM, including video card and USB passthrough.

      Thanks for taking time doing this blog! I really appreciate all the information.

      • marcusmarcus2

        Read some more of the comments here. I think 5.1 is my trouble. Will go to 5.0 and test.

        Thanks again for the awesome blog!

  • Ponzi

    Hey, you’ve got me looking into upgrading my server now. I hope you still check this so you could perhaps give me some guidance. Here’s my current setup.

    AsRock 960GM/U3S3
    Amd Phenom II 965 Black Edition
    16GB 1600Mhz
    400w Psu
    3 x 3TB WD Green Drives
    1 x 2.5TB (Seagate drive, I believe)
    1TB Drive(7200rpm)
    500GB drive(7200rpm)

    So my main use for this server: I have one machine running Windows 7 64-bit which pools the 3TB drives and the 2.5TB drive together using a program called Drive Bender. That VM also runs uTorrent, which is always downloading files. Then I have another VM with Plex Media Server on Windows 7 64-bit. The last VM contains Windows 7 64-bit again, but this is just a remote workstation: in case I’m at work or something, I can remote in and use it since my main computer stays off. The 1TB drive and the 500GB drive are used to hold the VMs. Overkill in space, I know, but I haven’t had time to move drives around. So the problem is that Plex Media Server uses a LOT of CPU power, as does Drive Bender.

    So here are my thoughts.
    First is to buy a NIC card, thinking it will speed up transfers from the server to the desktop, because right now all 3 VMs go through the 1 NIC on the motherboard. (Am I correct in assuming this?)

    Second is to upgrade the processor from the 965 to an FX-8350 with the ASRock 970 you mentioned. Since RAM prices are insane right now, I’m thinking about holding off on updating the RAM and just keeping the current motherboard with the FX-8350. Will I get IOMMU passthrough this way, or does the motherboard not allow it? Will this processor be worth the upgrade, or barely noticeable? I sometimes think part of my problem is not using a 10K RPM drive or SSD for the VMs. When I say slow, I mean the OS on the VM becomes unresponsive. It could be RAM, CPU, or hard drive.

    I’m also thinking about getting the SATA card because I have a small 60GB SSD that I could use as a datastore for some VMs to maybe make them faster.

    Let me know what you think and if I’m making the right choices.

    Just a little side note

    Thank you!

    • So I am very familiar with Drive Bender, and used it for a year or so until I recently built my own 32TB SAN with a hardware RAID card (article on this coming next week!). You’re right that the majority of your power is eaten by DB and Plex: Drive Bender can definitely eat some CPU when you’re really pushing the drives.

      My thoughts: Agree wholeheartedly on the NICs … you need to get them a dedicated GB NIC, or possibly two … you are correct that all traffic is going through your motherboard NIC, and that’s probably pushing it. I’d recommend a cheap PCI-e x1 GB NIC … they work fine for me for general-purpose stuff like this. In the lab I use dual/quad Intel Pro/1000s, but I don’t think you need that.

      Motherboard: I can’t find any hard evidence that the ASRock you have currently supports IOMMU, but you can go into the BIOS and see if it has an option to enable IOMMU. If it does, you can enable it, and then go into ESXi to Configuration >> Advanced Settings and see if anything is available for passthrough. You should at least see the SATA controller if it can handle IOMMU.

      RAM: It’s stupid expensive right now. If you can, hold off, or simply buy one or two 8GB sticks at a time. When I was forced to buy some RAM, this is what I did.

      Drives: If you’re running 5400/7200 RPM hard drives and trying to run 3+ disk-intensive VMs off that single datastore, you’ve got a bottleneck. A cheaper alternative here might be to add a couple of 500GB 7200RPM drives and move one VM to each drive. Another option, of course, is to go SSD (you’d need 128GB or so) and put just the OS drives from each VM there.

      So, if it was me, I’d (1) check my BIOS to see if the current mobo supports IOMMU, (2) attack your drive speed issue with either separate 7200RPM drives for each VM or a single, larger SSD, (3) then consider your CPU, as the 8350 would definitely give you more power vs the 965, which would help overall.

      Finally, don’t be scared to cap Plex at a couple of cores. I work enterprise cloud hosting, and we see *all the time* VMs get worse performance with more cores, because they have to wait on the additional cores to be freed up. The 8350 would help here also with the 8 cores to spread around.

  • dakrzone

    Hi Don, great site! Thanks for sharing the wealth of information you’ve put up. I’ve been drooling over your whitebox setups and reading many of your articles for a while now, and I’m finally ready to pull the trigger and build a similar setup at home. I do have a question, though, about how much of an environment this setup will allow me to have.

    Being new to VMware, I actually just recently finished a vSphere ICM class. I’m not real sure how many servers, etc. I need to have in my environment to really gain mastery of an ESXi environment. I know there are several levels of certification, and they may each require something different. But are there any resources that detail what all should be included in a VMware environment to ensure that I’m covering everything that needs to be covered? I hope my question makes sense…


    • Your question does make sense, and no, there are no specific guidelines listed anywhere. However, having a couple of VMware certifications myself, I can tell you that you can get by just fine with a single node doing nested ESXi installs. However, if you’d like a solid lab to test DRS, SRM, High Availability, work with switches, etc., I’d recommend at least two nodes and a gigabit switch (Juniper dominates in the two cloud datacenters I’ve worked in, especially the new SRX series). These nodes are plenty powerful enough for home labs, and they run the production VMs that I use locally just great. I would recommend each node have at least 4x GB NICs.
      If you have any other questions, don’t hesitate to let me know.

  • John A

    Hi Don, I stumbled across your site when I was spending countless hours researching how to build the perfect ESXi server (or come close to it). It was nice to know that the majority of the components I had chosen were roughly what you had used for your servers. I was a bit excited to hear that you can pass through a video card and have a terminal in the other room display the VM. The setup I ended up with was an ASRock 970 Extreme4 motherboard with an AMD 8-core FX-8350 CPU. I probably could have done better on performance-to-price, but I figured this setup will last a long time. One issue I had when passing through the video card: the VM recognises the card and installs the drivers, but after the machine reboots, the VM comes up with a yellow triangle against the video card (the machine OS is WinXP). I also have a second video card for the terminal video, the exact one you’ve mentioned, but the display seems to disappear halfway through the boot of the ESXi server. Would you know what could possibly cause the video issue, and more importantly, do you need a specific graphics card to perform passthrough correctly? Any assistance when you have time would be much appreciated. My experience level with ESXi is relatively new, but I come from a networking background, so I have managed to get other parts of the system working except for the issues I am having.


    • John A

      my motherboard was actually the 990FX Extreme4 (long day at work)

    • Sorry for the late reply: I missed this one somehow. This sounds as if you might be running ESXi 5.1. There are some passthrough issues in ESXi 5.1. I’d suggest rolling back to ESXi 5.0 for compatibility with passthrough if that’s the case.

  • Derrick

    Do you mind if I ask which Linux distribution of XBMC you are running as a VM on your server? I was looking into OpenELEC, but it doesn’t seem to come as an ISO image that can be booted off the datastore.

    • Not at all! Currently, I’m using XBMCbuntu. I’ve tried running XBMC under Windows as a VM, but not only do you have to worry about licensing, I just don’t see the point if that is all you’re going to use it for.

  • Christian D

    I was hoping you could help me with a build. I am looking to build something similar to this but with 8 X ram slots. Can you recommend a motherboard that supports 8 slots on single 8 core cpu?

    • You will need to go with an Intel CPU to do what you want, as LGA 2011 boards are the only ones available with 8 slots and quad-channel memory. All of these will run you on the other side of $200. ASRock has a really good track record with ESXi compatibility. You could go with the ASRock X79 Extreme6 or the Gigabyte GA-X79-UP4.

  • J

    Hi Don,

    Do you use PCI USB controllers, or are you able to forward specific USB ports from the motherboard to the VMs? Neat idea – I think I’m going to try to virtualize my two HTPCs and a NAS4Free box all in one unit by forwarding the SATA controllers (a 990FX chipset with 8). It’ll be fun trying to figure out which port doesn’t get forwarded! 🙂

    • Actually, I’ve done both: passing through a PCI-e USB expansion card completely to the VM and assigning it a USB controller and USB ports. Although I have not explored this completely, the first option seems quicker, but I have nothing to back that up. I intend to do some benchmarks to see if there really is a difference.
      As for the SATA, it’s usually the highest # port on the controller. In the case of the ASRock 970 Extreme 3, Port 5 and the eSATA port in the back of the mobo are controlled separately, so you get ports 1-4 passed to the VM, and ports 5 and 6 (6 being the eSATA) assigned to the ESXi host.
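      The reason the ports split this way is that ESXi passthrough works at the PCI-function level: each SATA controller on the board is its own PCI function, and only a whole function can be handed to a VM. You can see the split from the ESXi shell with `lspci | grep -i sata`. A minimal sketch of what that looks like, using made-up sample output (the controller names and PCI addresses below are illustrative, not captured from an actual 970 Extreme3):

```shell
# On the ESXi host you would run:  lspci | grep -i 'sata'
# The sample output below is hypothetical, standing in for a board that
# wires ports 1-4 to one controller and port 5/eSATA to a second one.
lspci_output='0000:00:11.0 SATA controller: onboard SATA ports 1-4
0000:02:00.0 SATA controller: secondary SATA/eSATA controller'

# Each line is a separate PCI function; passthrough takes a whole function,
# which is why the second controller's ports stay with the ESXi host.
echo "$lspci_output" | grep -ci 'SATA controller'
```

Passing through the first function moves ports 1-4 to the VM as a unit, while the second function (and any local datastore on it) stays with the host.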

      • J

        That’s awesome. I’m kicking myself for going Intel with my desktop, as it has enough punch to do a ton of stuff, but desktop nVidia cards just don’t pass through, it seems.

        Guess my next build is going to be all AMD. Are you planning on doing any articles on the how-to aspects of booting from a SAN/iSCSI?

        • Yes, in fact, I’m working on several articles, one of which has to do with iSCSI booting. I got a little sidetracked when I got interested in booting my Raspberry Pis off iSCSI, and now that I have them doing it, I was going to add Windows, Linux, and then the Raspberry Pis to the iSCSI boot post so it’d be more complete. Look for it next weekend.

  • Kelvin

    hi Don,

    You mentioned you used a 5-port PCI-e USB card for your build.
    Which card/chipset did you use for VM passthrough that works out of the box?

    • Thought I posted that in the article, and obviously I didn’t. My apologies. The MCS9990 chipset cards work fine for me; that’s the specific chipset I used. However, these are USB 2.0 cards, and I’ve heard scattered reports of USB 2.0 passthrough NOT working in ESXi 5.5, but USB 3.0 working fine. I haven’t had time to check this out; I just thought I would mention it.
      I’m racking a new ESXi box at FDCServers’ Chicago colo facility for SRM replication/Disaster Recovery, and I’ll probably go 5.5 with that box along with a vCenter 5.5 appliance to manage everything, and I’ll do some testing and report back.

  • disqus_GIWWUafPsw


    On the build, if one wasn’t planning on utilizing any HTPC/gaming machines as part of the VM list, would the additional video card be needed at all? I am just making sure that I could cut the cost by ~$100 by not using the second video card, and instead only using the console video card (ATI Rage XL Pro 8MB PCI Video Card).

    Have you by any chance tested this setup with the latest builds to see if the 5.1 problems still exist?

    Thank you very much for the excellent write up!

    • Thank you for the compliments, and you are correct: if you aren’t planning on doing any passthrough, then no, the extra video card would not be needed. You could further reduce the cost by purchasing a cheap case instead of the rackmount case.

      As for the new ESXi 5.5 builds, the problems do seem to be fixed, but there is a new issue that’s potentially more problematic: you really need to use vCenter to manage the VMs now, and that’s a paid product.

  • Brian


    Now that 5.5 has been released, would you change anything in this build?

    I’m toying with the idea of a build like this, except I want to install to USB flash and keep redundant copies around. I just need to find a case where I can stick a flash drive in and lock/secure it in place (similar to an SD card).

    • No, I wouldn’t change anything. I’ve done some tests with this build on a straight ESXi 5.5 install, and nothing seems broken at all. All my personal builds boot from USB sticks, and I clone copies and keep them around for backups (flash drives are so cheap in bulk nowadays).
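      Cloning a boot stick is just a raw block copy with `dd`. A minimal sketch of the backup/clone/verify loop, using ordinary files as stand-ins for the real devices (the `/dev/sdX` names mentioned and the file names are placeholders; on a live system you’d point `dd` at your actual sticks):

```shell
# Stand-ins for real devices; on a live system these would be something
# like /dev/sdb (the ESXi boot stick) and /dev/sdc (the spare stick).
SRC=./esxi-stick.bin
SPARE=./spare-stick.bin

# Demo setup only: fabricate a small "stick" so the sketch is runnable.
dd if=/dev/urandom of="$SRC" bs=1024 count=1024 2>/dev/null

# 1) Back the boot stick up to an image file.
dd if="$SRC" of=esxi-boot.img bs=1M 2>/dev/null

# 2) Write the image onto the spare stick.
dd if=esxi-boot.img of="$SPARE" bs=1M 2>/dev/null

# 3) Verify the clone is bit-identical before shelving it as a backup.
cmp -s "$SRC" "$SPARE" && echo "clone verified"
```

The image file doubles as the archival copy, so a dead stick can be rewritten from it at any time.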

  • Tom, my apologies about the late reply: I was on vacation this last week and then had to settle back in at work.
    Your list looks good, and although building a server out may seem a bit daunting at first, it’s really not. Take your time, and it will come out well.
    As for local storage, I think you could simply go with a single 2TB drive as a local datastore to begin with, and sure, the SSD would work also. You could put VMs with high performance needs on the SSD and throw everything else on the 2TB. When I built my first box, all my storage was a single 2TB green drive, and it did just fine.
    I’ll have a standalone forum setup by the end of the week and if you need any additional help, you can start a forum post there, and I’ll be glad to help out.

    • Tom

      Don, thanks! Got this all ordered shortly after I posted. Not sure why I was in such a little panic. Feeling good about the build and will update my reddit post as I go if you want to check it out. I gave a shoutout to your page.
      I am still in the dark in regards to hooking up router/switches and setting up VLANs but I can sweat that stuff later.
      As another user posted below, this did come out a bit pricier than $500. In my case, double, but I wasn’t bargain shopping. The local storage did account for a good chunk of that though.
      Hope your vacation went well!
      P.S.: I’m curious what you’re doing with those Raspberry Pis.

      • The forum is now open at

        The prices definitely have increased, most specifically RAM has almost doubled in price since I set this up, so this build is definitely a little pricier. I could get 32GB of RAM for $120 last year, and can barely get 16GB for that same price now.

        Vacation was awesome, thanks! As for the Raspberry Pi, I’ve been doing a number of things. One is running a weather station here at the house, and took several of them and made the world’s weakest Hadoop cluster 😀

  • The ASRock 970 Extreme4 is a solid board and a worthy alternative. Although I have not personally worked with it, ASRock verified to me through an email that all of their 970 chipset boards have IOMMU capability.
    There aren’t a lot of lists of IOMMU motherboards out, unfortunately (a reason I’ve been doing this), but Wikipedia does have a list at (in fact, I’m listed as one of the sources).
    One motherboard that I verified that has IOMMU support this last week was the MSI 970A-G43, another 970 chipset board.

  • Richard

    Great article and comments. I am in the middle of deciding on upgrading my current home lab from a 5.0 environment (albeit a very old one) to something more up to date and future-proof. One area I am interested in is the rackmount case, as you mention that you used a 2U case but the photos show what looks like a 4U case (which I believe you mentioned in the build list). Does the 2U case provide sufficient space for the MB, CPU & heatsink? Currently contemplating a 3U setup, but if a 2U would work then there appears to be more choice of 2U chassis.

    • My apologies on the case oversight, and you’re correct, that’s a Logitech 4U case in the pictures. For this particular node, I ended up stuffing a bunch of drives in, so I re-purposed the 2U for a diskless node. The 2U does provide enough room for the MB, CPU, and heatsink, and takes a regular, full-size power supply; however, it’s mATX only and requires low-profile cards. All 2Us require low-profile cards, and you can find a FEW that will take full-size ATX, but not many.
      A worthy option in the 3U range is the Norco 3U, which I also use. 3Us are the first rackmounts that take full-height cards, but they are always more expensive than the 2Us or 4Us.
      If you’re not concerned with space but still want to rackmount, then the Logitech 4U (although thin steel, it’s definitely cheap and huge inside) and the Rosewill 4U w/15 drive bays are options, both of which are under $100.

      • Richard

        No need to apologize as that’s what this area is for, asking questions etc….. Hopefully someone else will find this useful and save a question being asked.

        Thanks for the clarification, as my setup will be for diskless hosts, with all guest storage provided via a NAS solution. My current setup utilizes SFF desktop cases, which were proving untidy as well as only supporting m-ATX. I’m in the UK, so I can obtain a 2U that supports ATX MBs and am going down that route. m-ATX MBs are around, but I want as many features as possible supported, e.g. IOMMU, and haven’t found one as yet that supports this.

        • When you find a solid 2U case that supports a full ATX board, would you mind posting it? I’m working on a database of whitebox parts that I intend to make public on the site, and would love to add that to the case section.

  • Phil

    Hi Don, I have found your posts informative and inspiring so I have attempted to copy one of your builds. I have the Asrock 970 Extreme3, FX 8320 and 32GB of Corsair Vengeance ram running esxi 5.0 u3. It all went together well but I am having issues getting any video passthrough. I have tried a handful of different Nvidia cards which is all I have at hand. You appear to use Radeons for all your builds, have you tried Nvidia? Would you suggest using Radeon? Looking forward to your guide on video passthrough.

    • Phil, thanks for the comments! Definitely you’re going to need an AMD card; nVidia cards simply will not work for video card passthrough. This is the reason I use Radeons with all my builds. Remember, however, that if you’re using it for an HTPC, even very low end cards like the 92xx series will do for 1080p video.

  • TheRedBaron


    I was thinking of creating a virtual ESXi lab (using VM Workstation) based on the host 2 config you’ve listed here and wanted to get your thoughts on using a virtual lab with this config.


  • sr_stinson

    I have an Asrock Extreme 3 + FX-8320 processor running ESXi 5.5.

    First problem I have found is that the network (Realtek R8168, link shows as 1000Mbit full-duplex) is quite slow (~30Mbit) both to the ESXi host and to all the guest VMs (I’ve tried: a CentOS machine, an Ubuntu Machine and a WinXP machine).

    Any ideas? I’ve tried installing an additional Intel PRO 1000T card, with the same result (~30Mbit).


  • bryan

    Slot Setup for the ESXi AMD Whitebox

    PCI-e x16: Radeon HD6670 (Passthrough to VM)

    PCI-e x4 : LSI SAS3041E 4-Port SAS/SATA PCI-e x4 (Passthrough to VM)

    PCI-e x1 : 5 Port PCI-E USB Port (for Passthrough)

    PCI-e x1 : GB NIC (RealTek 8168, used by ESXi host)

    PCI : Intel Pro/1000 MT Dual Gigabit PCI-X NIC

    PCI : ATI Rage XL Pro 8MB PCI Video Card (Console Video)

    Optional SATA Controller Card: LSI SAS3041E 4-Port SAS/SATA PCI-e x4 — $25


    I am confused, as I am a newbie.
    How and why are the above things used and done?



    • This is a very general, broad question. I’d recommend signing up in our forums and asking anything there, as it would be better to address any longer questions like this in a forum thread.

  • wihe1

    Hi Don,
    The HIS Radeon card is no longer for sale. Can you recommend an alternative?

    • Sure, you can just bump up to the next generation. The HD7750 should work fine and is low-profile.

    • bryan

      Use any card with a 500MHz clock and 512MB of RAM.
      It doesn’t matter for virtualization.

      • Actually, that’s untrue: it’s dependent on the use. For the ESXi console view, sure, any old PCI card would work, but not for HTPC use or gaming use.
        For example, I use HDMI over CAT6 adapters and USB over CAT6 adapters to send the video and USB signals to a monitor and keyboard/mouse in my son’s room (around a 75′ run from the rack in the basement) and my son uses a VM, with a high-end card passed through to the VM, as his everyday computer and gaming machine. I do the same for three HTPC VMs that each have their own lower-end graphics card and those terminate at TVs and wireless HTPC keyboards.

  • Alexandru Maran

    Does this board, the ASRock 970 Extreme3, take ECC RAM?

  • Butch Pornebo

    It’s 10/2015. With the same budget, what would you recommend using, AMD or Intel?

    • With the same caveats I’ve always given: AMD for your home labs, Intel for production. AMD, unfortunately, has pretty much stood still in the server market, and although I love their APUs, they aren’t of any real use on the virtualization side of things. My latest build was a Core i7-5960X that was sent to me by accident by Amazon Warehouse Deals, with 16 x 500GB SSDs in RAID10 on a 9260-16i w/FastPath and 64GB of DDR4. Very nice virtualization box.

  • GC

    Hi Mr. Fountain, I need a little clarification about the storage in your configuration.

    I’m going to buy quite the same hardware configuration as posted by you.

    (Asrock 970 Extreme 3 2.0 + FX-8350 and same other HW you tested).

    I need to understand if I can avoid buying a PCI-e SATA controller and use the motherboard’s internal SATA ports, with two 3TB SATA III disks as the only storage for VMFS and everything else; ESXi will be installed on a flash drive for the beginning.

    Do you think your configuration will work with vsphere version 6.0 also?

    Thanks in advance

  • Maverick


    I’m trying to build an ESXi home whitebox for gaming, but one of my requirements is to have a board with IPMI KVM for remote administration.

    Right now i have this setup:

    MB: Asrock Rack EPC602D8A

    CPU: Intel(R) Core(TM) i7-3820 CPU @ 3.60GHz

    RAM: 20GB
    3 x 4GB DDR3 PC3-10600 (Kingston 99U5471-020.A00LF)
    1 x 8GB DDR3 PC3-10600 (Kingston 99U5471-052.A00LF)

    RAID CARD: LSI MegaRaid 8888ELP

    GPUs TRIED: NVIDIA GTX480 and AMD Radeon HD7800

    ESXi: 6.0U2

    The problem I’m having is that starting the VM that has the passed-through GPU causes ESXi to completely hang.

    Has anyone seen this problem before? Any suggestions? Does anyone know of any motherboard with IPMI (AMD or Intel) that would work?

    • You’re not going to get IPMI unless you either (a) purchase a server-level board, or (b) purchase an outside KVM. What you’re likely seeing isn’t a lockup, but rather when ESXi grabs control of the video card. This leaves the screen “as-is” from when it grabbed it, so it looks like the box is frozen, when it’s actually not. The solution is to put a 2nd, cheapo graphics card in for ESXi to use … if you really need that console view with a monitor. Or, you can just manage the whole thing headless with the vSphere client for Windows.