Intel Pro/1000 Dual Gigabit NIC PCI-X Card in PCI Slot

Intel Pro/1000 MT on ESXi in a PCI Slot

The issue that a lot of people run into when creating ESXi home labs is getting enough network cards to properly simulate a production environment, so that they can segregate all of their network traffic into proper VLANs. Consumer-level hardware doesn’t normally come with 2 NICs onboard, and although adding additional NICs is possible, getting to four physical NICs while keeping expansion slots free for other things can be difficult.

To solve this, I’ve started using Intel Pro/1000 MT Dual Gigabit NIC PCI-X cards. It’s the same as sticking two PCI GB NICs on your board (remember that a PCI bus shares its bandwidth anyway, unlike PCI-e, which has dedicated lanes), except that (a) you free up a PCI slot for something like a PCI video card that you can devote to the ESXi host, (b) you get a solid, proven chipset for ESXi, the Intel Pro/1000, and (c) it’s actually cheaper. I’ve been picking up these dual NIC cards off eBay for <$10 each with free shipping, and you can’t beat that for dual NICs.

ESXi Networking Configuration: 4 NICs, Two Switches

So why four NICs or more per host? In modern server or business environments, you’ll usually see 4 GB NICs per host (in larger server environments, 4 10GB NICs are about all each host will be allowed). This allows you to segregate your Management, Storage, Fault Tolerance, vMotion, and VM traffic into proper VLANs and keep them isolated from one another. It also lets you simulate a production environment and learn about networking and VLANs along the way. From here, you’ll attach to two physical switches. The diagram below shows a good setup for 4 NICs, and a scripted sketch of the layout follows it:

ESXi Network Configuration: 4 GB NICs
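
To make that concrete, here’s a minimal sketch, in Python, that just prints the esxcli commands you would run on an ESXi 5.x host to build this kind of layout. The vSwitch assignments, vmnic names, port group names, and VLAN IDs are illustrative assumptions, so substitute your own:

    # Sketch: emit esxcli commands for a 4-NIC ESXi host with segregated VLANs.
    # The vSwitch/uplink layout and VLAN IDs here are assumptions for illustration.
    # (vSwitch, uplink NICs, [(port group, VLAN ID), ...])
    LAYOUT = [
        ("vSwitch0", ["vmnic0", "vmnic1"], [("Management", 10), ("vMotion", 20)]),
        ("vSwitch1", ["vmnic2", "vmnic3"], [("iSCSI", 30), ("VM Network", 40)]),
    ]

    for vswitch, uplinks, portgroups in LAYOUT:
        # vSwitch0 already exists on a fresh install, so its add can be skipped.
        print(f"esxcli network vswitch standard add --vswitch-name={vswitch}")
        for nic in uplinks:
            print(f"esxcli network vswitch standard uplink add "
                  f"--uplink-name={nic} --vswitch-name={vswitch}")
        for pg, vlan in portgroups:
            print(f"esxcli network vswitch standard portgroup add "
                  f"--portgroup-name='{pg}' --vswitch-name={vswitch}")
            print(f"esxcli network vswitch standard portgroup set "
                  f"--portgroup-name='{pg}' --vlan-id={vlan}")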

I’ll make a post later on ESXi networking configuration specifically, but this should give you the basic idea. Please note that you will need VLAN-aware switches to do this properly, which means a managed or smart switch. Personally, I use smart switches, simply because they are cheaper; a smart switch is basically a managed switch with a reduced, GUI-driven feature set. For my home lab, I was able to pick up two 24-port gigabit D-Link switches (D-Link DGS-1124T) for around $75 each, and they worked just fine. You could get away with 16-port switches if you wanted.

PCI vs PCI-X

PCI has had a long and illustrious history, starting out with PCI v1.0 running at 33 MHz on 5 volts, and later moving to PCI v2.1, running at 66 MHz on 3.3 volts. A 64-bit slot was also made for high-end networking. So, you could end up with up to 4 different card types in your computer, as shown below. Modern consumer-grade motherboards use the 3.3V 32-bit slots. PCI-X cards run in a 64-bit slot, and the last version, PCI-X v2.0, ran the bus at 266 MHz or 533 MHz.

32-Bit and 64-Bit PCI Slots

A PCI-X card will fit in a 3.3V 32-bit PCI slot. But it’s longer! It will still fit; the extra connector simply hangs out the back of the slot. This is fine, and the card runs, just at the slower PCI speed.

PCI-X Intel Pro/1000 MT Gigabit NIC in a PCI Slot

Note that PCI-X 5V cards specifically are keyed so that they will NOT fit into a 3.3V slot. If yours doesn’t fit, DON’T force it.

PCI-X Gigabit Network Card in a PCI Slot: Performance

PCI bus bandwidth can be calculated with a simple formula: clock frequency × bus width (in bits) = bandwidth. PCI buses operate at the following bandwidths, with a quick sanity-check script after the list (note that the 32-bit slots, running at 66 MHz, are what consumer-grade motherboards use):

  • PCI 32-bit, 33 MHz: 1067 Mbit/s, or 133 MB/s
  • PCI 32-bit, 66 MHz: 2133 Mbit/s, or 266 MB/s
  • PCI 64-bit, 33 MHz: 2133 Mbit/s, or 266 MB/s
  • PCI 64-bit, 66 MHz: 4267 Mbit/s, or 533 MB/s
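
And here is that formula as a quick Python sanity check; it reproduces the figures above, within rounding (33/66 MHz are nominally 33.33/66.67 MHz):

    # PCI bandwidth = clock frequency x bus width (bits).
    BUSES = [
        ("PCI 32-bit, 33 MHz", 33.33e6, 32),
        ("PCI 32-bit, 66 MHz", 66.67e6, 32),
        ("PCI 64-bit, 33 MHz", 33.33e6, 64),
        ("PCI 64-bit, 66 MHz", 66.67e6, 64),
    ]

    for name, hz, bits in BUSES:
        mbit = hz * bits / 1e6  # megabits per second
        print(f"{name}: {mbit:.0f} Mbit/s = {mbit / 8:.0f} MB/s")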

And here are some other data bandwidths for examples and comparisons:

  • SATA 1 (SATA-150): 150 MB/s
  • SATA 2 (SATA-300): 300 MB/s
  • SATA 3 (SATA-600): 600 MB/s
  • Fast Ethernet (100base-X): 12.5 MB/s
  • Gig-E (1000base-X): 125 MB/s

So, a PCI-X dual GB NIC card running in a 66 MHz, 32-bit PCI slot should be able to max out both NICs running at full speed, with a bit left over. Of course, this is a theoretical maximum for GB NICs, and on most networks you won’t see it hit. In addition, your home lab would rarely, if ever, hit this maximum throughput, and then not for long (I’m thinking vMotion here, but then you’re limited by the speed of the datastore). So, in answer to the question of whether a PCI-X dual GB NIC card will work in a PCI slot, the answer is yes, and it will still leave some bandwidth on the PCI bus for something like a video card.
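
Here’s the same back-of-the-envelope math in Python, using the half-duplex line rates from the list above (as a commenter points out below, full-duplex traffic can double the demand):

    # Rough headroom check: two gigabit NICs sharing a 66 MHz, 32-bit PCI bus.
    bus = 66.67e6 * 32 / 8 / 1e6  # shared PCI bus, ~266 MB/s
    demand = 2 * 125              # two GbE ports at 125 MB/s each, half duplex

    print(f"bus {bus:.0f} MB/s vs demand {demand} MB/s -> "
          f"headroom {bus - demand:.0f} MB/s")
    # Full duplex could double demand to 500 MB/s, more than the bus can feed.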

In my home lab, the motherboards that I use (ASRock 970 Extreme3) have two PCI slots. I use one of these Intel Pro/1000 MT Dual Gigabit NIC cards in one slot and an old PCI ATI Rage XL video card in the other for the ESXi console. That’s plenty of bandwidth to share between those items, and I get a solid, supported GB NIC card for a low price. For the remaining NIC I need (the motherboard NIC brings the total to 3), I simply use a generic PCI-e Realtek GB NIC card off eBay (they can be had for $7). ESXi 5.0 supports the Realtek 81xx chipsets, which is what they use. So, for <$20, I have 4 GB NICs on my ESXi host in my ESXi home lab.

  • W. L.

    This is a great write-up. It explores the cheaper options for Intel NICs for ESXi.

    There is one problem, however. Remember that Ethernet is full duplex, so the bandwidth per Ethernet port is actually double the speeds you posted. There is a reason Intel put these dual NICs on 64-bit PCI slots instead of 32-bit.

    • You are absolutely correct here, and thank you for pointing this out. With full duplex, two GbE ports could demand up to 500 MB/s, which is more than a 266 MB/s bus can feed.

      • paul_krupa

        The PCI-X/PCI-e, 32-bit/64-bit, 5-volt/3.3-volt, duplex discussion is very confusing.

        I read some comments about buying the wrong stuff, so I’ve been reading about the differences.

        I pasted a wikipedia (found under pci-x) clipping below that I found helpful.

        It generally says PCI-e and PCI-X have similar MAX bandwidth, but “e” is simpler to put on a motherboard (shorter, fewer traces, full duplex), while “x” limits the bus to the lowest common denominator.

        First question: Did you edit the text to reflect the point made by W.L (duplex speed)?

        Second: is that quad Intel PCI-e board a better choice than the dual Intel PCI-X? Is it supported?

        Since I haven’t built my box yet, I haven’t tried any of this. Can these quad devices be split across VMs when passed through?
        Thanks.
        Paul

        From Wikipedia:
        PCI-X is often confused by name with similar-sounding PCI Express, commonly abbreviated as PCI-E or PCIe, although the cards themselves are totally incompatible and look different. While they are both high-speed computer buses for internal peripherals, they differ in many ways. The first is that PCI-X is a 64-bit parallel interface that is backward compatible with 32-bit PCI devices. PCIe is a serial point-to-point connection with a different physical interface that was designed to supersede both PCI and PCI-X.

        PCI-X and standard PCI buses may run on a PCIe bridge, similar to the way ISA buses ran on standard PCI buses in some computers. PCIe also matches PCI-X and even PCI-X 2.0 in maximum bandwidth. PCIe 1.0 x1 offers 250 MB/s in each direction (lane), and up to 16 lanes (x16) are currently supported each direction, in full-duplex, giving a maximum of 4 GB/s bandwidth in each direction. PCI-X 2.0 offers (at its maximum 64-bit 533-MHz variant) a maximum bandwidth of 4,266 MB/s (~4.3 GB/s).

        PCI-X has technological and economical disadvantages compared to PCI Express. The 64-bit parallel interface requires difficult trace routing, because, as with all parallel interfaces, the signals from the bus must arrive simultaneously or within a very short window, and noise from adjacent slots may cause interference. The serial interface of PCIe suffers fewer such problems and therefore does not require such complex and expensive designs. PCI-X buses, like standard PCI, are half-duplex bidirectional, whereas PCIe buses are full-duplex bidirectional. PCI-X buses run only as fast as the slowest device, whereas PCIe devices are able to independently negotiate the bus speed. Also, PCI-X slots are longer than PCIe 1x through PCIe 16x, which makes it impossible to make short cards for PCI-X. PCI-X slots take quite a bit of space on motherboards, which can be a problem for ATX and smaller form factors.

    • Philippe Olivier

      So if I’m right, a dual adapter in a 32-bit PCI slot is already maxing out when I’m using just one of the two ports? What if I put 3 cards in my system (as I have 3 PCI slots)? Still 125 MB/s full duplex, or do I have 3x 125 MB/s?

      • You’re correct here. The point of the post is more of a cheap workaround for a home lab that isn’t intent on maxing the adapters out, rather than something to be used in production.
        PCI is a parallel bus rather than a serial bus. The bus is shared and arbitrated, so only one card can actually use it at a time; it doesn’t matter if you stick 3 cards in, it doesn’t increase the total bandwidth of the bus itself.

  • pxp

    Hi,

    Could you please confirm whether a Silicom PXG6I-RoHS PCI-X Server Adapter (6-port copper Gigabit Ethernet) would run on any of the latest Gigabyte/Asus Z87 motherboards? Its specs read:

    PCI Card Type: +3.3V 64 bit Card
    PCI Voltage: +5V (Min 4.75V, Max, 5.25V)
    PCI Connector: +3.3V 64 bit
    Controller: Intel: 82546GB / 82546

    I’m in the process of setting up a vSphere lab at home and have already completed Cat6 cabling, together with a Cisco SG200 26-port L3 switch.
    Do you think it would work, or alternately, could you point me to similar 6-port NICs? (Of course, the cheaper the better!! 🙂)

    • This should work fine as far as fitting goes. My concern would be that there wouldn’t be enough bandwidth available on the 32-bit PCI slot to handle six full-duplex GB NICs; that could call for up to 6 x 2 x 125 MB/s = 1,500 MB/s against a bus that tops out around 266 MB/s. Although a home lab is rarely maxing its NICs out, if you’re using iSCSI, vMotion, and HA, it wouldn’t take much traffic to saturate the bus when your lab gets busy.

      Looking on eBay, the cheapest PXG6I that I can find is basically $90. You can pick up an Intel Pro/1000 Quad GB PCI-e NIC card usually for around $79 (I lucked up and picked up an extra for $49 this week). This one is only $74.99 and I use two of these in my lab … they perform flawlessly: http://bit.ly/1933Bnz

  • Sure, I’m working on a dedicated forum for the site (the current one isn’t working) and should have it up by the end of the week. Can you PM me then, and I’ll shoot them to you?
    By the way, these were a pain to set up, so it’s not just you. They do some odd stuff with their configs, and it took quite a bit of work for me to get it right.

  • Michael

    Don,
    I got the same card in a GA-H87-D3H, but can only get 100Mb full duplex. Is there any specific driver that needs to be installed? Tried on ESXi 5.1 and 5.5, same result.

    Thanks

  • Shaun

    Hi Don, great blog. Since you’ve been down this road, I’ve got a question for you.

    So I bought three NICs for my workstation and server.
    Workstation – Intel Pro/1000 MT Quad port PCI-X
    Server – 2x Intel Pro/1000 MT Dual port PCI-X
    The funny thing is that I wasn’t aware that PCI-X is not compatible with PCI-e.
    Workstation – Asus Z87 Sabertooth (3x PCIe 16x | 3x PCIe 1x)
    Server – MSI Z68A-GD55(G3) (2x PCIe 16x | 2x PCIe 1x)

    As you can see, no PCI/PCI-X slots, and I’m pulling my hair out. My first thought was shoving a riser in there. Or selling these bad boys and going for PCIe cards, which could literally bankrupt me. Any ideas? Could really use some insight, thanks.

    • Not really much of a choice here, unfortunately. You might be able to get six NIC ports for around $75 off eBay if you shop around (1 dual + 2 duals), but you’re going to spend money either way you go.
      You can pick up PCIe-x4 Intel Pro/1000 Dual Gigabit NICs on eBay for around $30 http://bit.ly/17Kr63f which is not much more than buying individual generic PCIe-x1 NICs off eBay and taking the chance of getting cheap, low-performing knock-off junk.
      Pro/1000s are great NICs regardless, and never need drivers. In fact, I *just* picked up 4 of the dual NICs off eBay in a bulk deal for $15 each.
      The PCIe-x4 quad NICs run a bit more, but can usually be had for $75 or so http://bit.ly/1gpq29y

  • Bruno Cunha

    Do 2-port and 4-port NICs act like switches? Can port 1 send data to port 2 without using the PCI bus?

    • Although I have not tried this, I would not think so. The traffic is still going to have to come on and off the PCI bus to go to the CPU.

  • disqus_VF8li09P5L

    Bear with me, as I’m a noob… upon your suggestion, I recently picked up an “Intel Pro 1000” from eBay (the pictures were of an Intel card) and found out that it’s a rebranded HP NIC. Do these have driver issues, or should this work fine without manual intervention?

    • These should work fine. It’s the actual chip on the board that the driver looks for, not the brand of the card. It’s similar to seeing different video cards built on the same chipset.

  • rsriram

    A dumb question perhaps – can I install the quad port ‘server’ adapters on a desktop motherboard? I have a Gigabyte GA-G41M-ES2H and am looking for a quad port Intel NIC so that I can install pfSense on it. I know I should be looking for PCI-e based NICs, but the server/desktop terminology is throwing me off a bit. Nowhere is it written that server NICs cannot be used on desktop motherboards. Hope you can help me out with this.

    Thanks

    • Sure, there’s no reason it couldn’t be. The “server” designation gets thrown around too much, and usually means something along the lines of “meant to run 24×7”, “more reliable”, or “not normally used in a desktop”. In this case, this is a venerable chipset; Windows picks it up and auto-installs drivers for it, and you’re good to go.
      In fact, all of my ESXi nodes, which is what I use these on, are built on consumer desktop motherboards. The shot above shows one installed in an ASRock 970 Extreme3.

  • r1ckJames

    Don,
    I need your help! A friend and I are looking to start an MSP; we’re both in our late 20s, so we don’t have a ton of cash. I was a gamer for a while and built some monster enthusiast PCs. I still have some of those parts and have added to them since I started this server build. I am building a server for ESXi, but this is my first try with desktop/prosumer parts. I’m getting stuck on network drivers, so check out this parts list:
    -ASUS P6T Deluxe V2
    -Intel i7 920
    -12GB Gskill Trident
    -ATI HD5870 1GB
    -Intel 530 series SSD – 180GB (for VM’s)
    -2x 2TB WD Caviar Black

    *Tried the install on this and it failed at the NIC check; it detected none. I think the onboard ports are Marvell. This led me to buy a “Dual Intel Pro 1000 MT Gigabit Ethernet Server Adapter PCI-X”. I installed it and tried again; it still fails on network drivers. Keep in mind the onboard NICs are disabled in the BIOS. Using the ESXi 5.5 ISO from the vmware.com site.

    ANY help is appreciated! This server build is killing me because I’m itching so badly to create VMs, but I’m stuck! Please help!

    • If you’re not going to purchase a license, you won’t be able to manage your ESXi node properly without vCenter. I’d recommend installing ESXi 5.0 with Update 1 if you’re interested in doing PCI Passthrough (I’m not sure if the mobo supports it though), or ESXi 5.1. Either of those should install without issue.
      If you’re still having issues, sign up in the forums and post a thread, and we’ll see if we can’t help you out.

      • r1ckJames

        Thanks Don! Just joined the forums; which category do you suggest posting a topic like this in?

        • You could go with motherboards or the catch-all. Really doesn’t matter.

          • r1ckJames

            Created new forum post in the mobo section:

            ESXi on ASUS P6T Deluxe V2

            Thanks again!

  • cueone

    Hey Donald, this may be a stupid question, but here goes. I have a Synology gigabit NAS (iSCSI storage) connected to a gigabit switch. My 2 ESXi hosts also share the same switch. Everything is currently running on one subnet. Can having iSCSI storage, management, and VM traffic all on one VLAN negatively affect VM performance?

    • Absolutely, it can. You’ll always want to split these off into separate VLANs/subnets so that the traffic is completely separated. Also, if you’re doing multi-NIC iSCSI, you will want each NIC dedicated to iSCSI on a separate VLAN/subnet and your path selection policy set to round robin. So you should have a VLAN for management, public traffic, vMotion, and iSCSI. If you had FT (Fault Tolerance), then you’d want a separate VLAN/subnet for that also.
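
      For the round-robin piece, here’s a minimal sketch of the esxcli side, in the same print-the-commands style as the post above. The device identifier is a made-up placeholder; list your real device IDs with esxcli storage nmp device list first:

          # Print the esxcli command to put an iSCSI device on round robin.
          # "naa.6000..." below is a hypothetical placeholder device ID.
          device = "naa.600000000000000000000001"  # substitute your own

          print(f"esxcli storage nmp device set --device {device} --psp VMW_PSP_RR")
          print("esxcli storage nmp device list")  # verify the change took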

      • cueone

        OK, thanks so much for replying. I went ahead and ordered 4 Intel Pro/1000 dual port gigabit NICs (4 NIC ports for each host). I already have a Cisco 2960 switch and a 2800 router, so I can do proper VLANing. My Synology NAS only has 1 gigabit connection, so I guess I only need one VLAN for iSCSI traffic. This is just my home lab for VMware training and testing things for work, so I don’t need storage FT or anything like that. I do want to enable HA and DRS, though.

        NAS:
        Synology Diskstation DS-411slim
        4 x 780GB 7200 drives.
        RAID 5 configuration

        2 HOSTS:
        Shuttle SH87R6
        Intel Core i7-4770 3.4Ghz
        32GB 1600MHz
        120GB Samsung SSD

        • You’re welcome, and it looks very nice. The Intel Pro/1000s are some great NICs; you won’t be disappointed. I’m running the quad PCI-e versions in all my production boxes. Feel free to join the forum and share your build! http://forums.thehomeserverblog.com

  • Gustavo Gomez

    Will the Pro/1000 MT Dual Gigabit NIC PCI-X cards… fit in my PC?

    3.60 gigahertz AMD FX-4100 Quad-Core

  • Jared Mason

    Donald, I have this exact card, Intel J1679, in an old Dell PowerEdge 2900. I just installed ESXi 5.5u2, but it doesn’t seem to support this card anymore. In fact, I’ve tried looking it up in the VMware compatibility guide, and it isn’t there as far as I can see. Am I just missing something here, or do I need to go back to a previous version of ESXi? Have you tried this card with the latest ESXi version?

  • IBMMuseum

    Great write-up. I wish I knew whether the missing jumper pins on the adapter were for extending the activity LEDs to a front panel…

  • Randy H.

    PCI-X cards will not fit in all PCI motherboards. I’ve had some boards with components mounted behind the slots that block the overhanging part of the extended connector, so the card can’t be inserted.