vCenter Server Build with a Starwind iSCSI Target

vCenter Server Build

Up until now, I have been running vCenter as a VM inside my cluster. There's a long-running debate over the pros and cons of running vCenter as a VM vs. on a physical box, and I won't get into that here except to say it didn't work for me and my specific requirements/desires. To that end, I decided to build a dedicated Windows 2008 box to use as my vCenter server. In addition, I decided to let the box do double duty and also run Starwind iSCSI (the free edition), making it one of my iSCSI nodes and letting me take advantage of RAM caching on the iSCSI target. Although most people recommend running nothing but vCenter on whatever physical server or VM you dedicate to it, I've found that the box can also serve as an iSCSI node with no performance issues.

Starwind iSCSI Software: An Overview

Starwind iSCSI SAN Free

If you have never heard of it, Starwind is an amazing piece of free software put out by the folks at Starwind Software. It lets you create an iSCSI target of unlimited size, or, if you want two highly available iSCSI nodes, targets of up to 128GB. This is perfect for those of us looking to create an easy-to-use, high-performance iSCSI target for our labs, or looking to run a highly available iSCSI setup, and I commend them for making such an awesome tool available to us.

Starwind iSCSI also supports RAM caching; note, however, that the free version only allows 512MB of RAM for caching, which in my opinion is enough for a home lab. If you're looking for a larger RAM cache on your iSCSI setup, I would heartily recommend FancyCache. FancyCache sits between the OS and your drives, caching requests at that level, so if you need more than Starwind's 512MB of cache on the free version, you can use FancyCache to cache in front of the iSCSI target. It also comes with a GUI through which you can monitor your cache hits and performance. I have run Starwind's 512MB cache, a full Starwind RAM cache, and FancyCache (at 8GB and 12GB of cache), and they all perform admirably, with very similar performance. What determines your choice here, I believe, is how much cache you actually need. If you feel you need more than 512MB and still want to stay free, then I would suggest Starwind and FancyCache together.
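
To make the block-level RAM caching idea concrete, here is a minimal, purely illustrative Python sketch of a read-through cache with LRU eviction. This is not how Starwind or FancyCache are actually implemented (both work much lower in the stack and also handle writes); it just shows why repeat reads of hot blocks stop touching the spindle once they are in RAM. The file name and size constants are example values.

    # Minimal read-through block cache sketch (illustration only).
    # Real products cache below the filesystem and add write policies; the core
    # idea is the same: serve repeat reads from RAM instead of the disk.
    from collections import OrderedDict

    BLOCK_SIZE = 8 * 1024              # 8K blocks, a typical VMware I/O size
    CACHE_BYTES = 512 * 1024 * 1024    # e.g. the 512MB allowed by the free edition
    MAX_BLOCKS = CACHE_BYTES // BLOCK_SIZE

    class BlockCache:
        def __init__(self, backing_path):
            self.backing = open(backing_path, "rb")   # e.g. the iSCSI image file
            self.blocks = OrderedDict()               # block number -> bytes, in LRU order

        def read_block(self, block_no):
            if block_no in self.blocks:               # cache hit: served from RAM
                self.blocks.move_to_end(block_no)
                return self.blocks[block_no]
            self.backing.seek(block_no * BLOCK_SIZE)  # cache miss: go to the disk
            data = self.backing.read(BLOCK_SIZE)
            self.blocks[block_no] = data
            if len(self.blocks) > MAX_BLOCKS:         # evict the least-recently-used block
                self.blocks.popitem(last=False)
            return data

The more RAM you can devote to a cache like this, the more of your working set stays resident, which is exactly why the 8GB and 12GB FancyCache configurations mentioned above feel so fast.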

Currently, I'm running 16GB of RAM in the vCenter box, and about 12GB of that is pure RAM cache (via FancyCache) in front of the iSCSI target. That makes the iSCSI target blazingly fast, and I get great performance off of it. My mind simply drools at the thought of an iSCSI box running 128GB of RAM and Starwind iSCSI. A final bonus to all this? It's also on the HCL. You can find more information about the software at http://www.starwindsoftware.com/starwind-iscsi-san-overview and about the caching at http://www.starwindsoftware.com/features/high-speed-caching

vCenter Server: Build Thoughts and Philosophy

Server Build

As with all of my builds, this one uses a rack-mount case and all consumer-level hardware to keep it as cheap as possible. My goal is great performance at an attractive price point. Although I understand the benefits of running enterprise-grade hardware in a production environment, these builds work great for my home lab and do double duty running all the VMs in my household, such as my virtual HTPCs running XBMC. Major processing power was not a requirement here (we're not doing any heavy lifting), so I'm actually using a simple dual-core AMD A4-3300 Llano running at 2.5GHz that I purchased for $25 off eBay. A $40 motherboard, 16GB of RAM, a cheap 64GB SSD for the OS, a 1TB drive for the iSCSI target, two additional gigabit LAN ports, and a power supply let me slide in at $337. Not bad for a 16GB vCenter/iSCSI box with three gigabit LAN ports.

A note about the hard drive I use in this box: although I might be tempted to grab a 10K VelociRaptor, or pick up a used SAS controller and run 15K SAS drives, with the RAM cache in place, and this being a home lab, I have almost no need for a blazing-fast drive. However, for this role I do not want a green drive either. The RAM cache needs some responsiveness from the disk when it pulls data in, and green drives will spin down and cause delays. My choice here is the Hitachi UltraStar series: enterprise-level drives with a large cache, running at 7200RPM, with high mean-time-between-failure ratings. These are great drives, and real workhorses.

In addition, I added an Intel Pro/1000 dual gigabit NIC in a PCI-X form factor that I snagged off eBay for $12 and put in a regular PCI slot. See my article on running an Intel Pro/1000 Dual Gigabit NIC PCI-X card in a PCI slot for more information on this setup. That gave me three gigabit LAN ports: one to connect to vCenter on a management VLAN, and two for iSCSI. Starwind lets you choose which adapters it uses for iSCSI traffic, so this works out perfectly: I can deselect the management NIC, and all iSCSI traffic stays on the iSCSI VLAN without any issues.
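
The way Starwind keeps iSCSI traffic off the management NIC is easy to picture: it simply does not listen on adapters you deselect. As a rough, hypothetical illustration (this is not Starwind's code, and the IP addresses are made-up examples), a Python listener bound only to the two storage-NIC addresses behaves the same way:

    # Illustration of per-adapter listener binding (hypothetical addresses).
    # By binding only to the storage NICs' IPs, nothing ever answers on the
    # management interface; the same effect as deselecting it in Starwind.
    import socket

    ISCSI_PORT = 3260                                 # standard iSCSI target port
    STORAGE_NIC_IPS = ["10.10.10.5", "10.10.10.6"]    # example addresses on the iSCSI VLAN
    # The management NIC (say, 192.168.1.5) is deliberately left out of the list.

    listeners = []
    for ip in STORAGE_NIC_IPS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((ip, ISCSI_PORT))                      # listen on this adapter only
        s.listen(5)
        listeners.append(s)
        print("Listening for iSCSI traffic on %s:%d" % (ip, ISCSI_PORT))

Run on a host that actually owns those addresses, the listener is simply unreachable from the management VLAN, which is the whole point of the adapter selection.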

vCenter Server: The Build List

vCenter Build List

If you've read other articles on my blog, you'll know that I'm a firm advocate of eBay and Amazon used deals. Both have buyer protection, and Amazon's A-to-Z Guarantee means that even if I buy a used item from an individual, I still get a 30-day, no-questions-asked return policy, so I get many of my items at far below list price. The items below show the price I paid, while the links will take you to an item page on Amazon where you can see both the list price and a list of used items.

Without further ado … the list:

Total Cost: $335 ($272 without the case)

vCenter Server Notes

vCenter Build Notes

As an after-note, I'd like to make a comment on the Logisys power supply I listed. I know a lot of people may turn their noses up at them, but I have been using these power supplies for almost 3 years now and have never, ever had one fail on me. The only computer in the house that doesn't have one is my personal desktop, which requires an 1100W power supply. Even my NAS box, which is pushing ten 2TB green drives (remember, the WD20EARS only draws 6W at idle, with a max draw of ~8W under constant 4K read activity), runs one of these power supplies.

For ~$28 for a 550W power supply (plenty to drive these low-power boxes) that has lasted me 3+ years, I'm almost deliriously happy. At that price, I always buy two and put the extra on the shelf for the day one burns out or fails. And yes, I really do have a dozen of these sitting on my parts shelf as spares.

The vCenter Server Build

vCenter Server Build

The actual build went without issue, and everything booted the first time around. For installing the OS, I use Microsoft's Windows 7 USB/DVD download tool. Basically, it lets you copy an ISO to a USB drive and makes the drive bootable. You simply insert the USB drive, select it as the boot device, and install your OS from there. Considering the prices of USB drives, it's not expensive to pick up an 8GB drive and keep it handy for installs. You can find the tool at http://www.microsoftstore.com/store/msusa/html/pbPage.Help_Win7_usbdvd_dwnTool
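
For the curious, what the tool does under the hood is conceptually simple: put the contents of the ISO onto the stick and make the stick bootable. Here is a rough, hypothetical Python sketch of those two steps, assuming the ISO is already mounted; the drive letters are made-up examples, the real tool also formats the stick for you, and bootsect needs an elevated prompt:

    # Rough sketch of what the USB/DVD download tool automates (illustration only).
    # Assumes the install ISO is mounted at E: and the USB stick is F: (hypothetical letters).
    import shutil
    import subprocess

    ISO_DRIVE = "E:\\"     # mounted install ISO (example)
    USB_DRIVE = "F:\\"     # USB stick, already formatted (example)

    # Step 1: copy every file from the install media onto the stick.
    shutil.copytree(ISO_DRIVE, USB_DRIVE, dirs_exist_ok=True)

    # Step 2: write a boot sector so the stick boots the Windows setup loader.
    # bootsect.exe ships in the \boot folder of the install media.
    subprocess.run([USB_DRIVE + "boot\\bootsect.exe", "/nt60", "F:"], check=True)

Either way, the end result is the same as Microsoft's tool: a stick you can boot the server from and install Windows without burning a DVD.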

For most of my newer builds, I've been using an iStar 2U Value rack-mount case, which I've found to be wonderful: roomy and stylish, well built, and it even takes full-size ATX power supplies. The case has a grill on the lid that lets top-fan power supplies breathe, which is a nice touch.

The case has a fan mount near the front that holds two 80mm case fans, and to these I attach Vantec Tornado 80mm fans, which are high-volume case fans. Running at 5700 RPM and pushing 84.1 CFM, they clock in at 55.2 dBA, so unless your rack is either (a) in a closet or (b) in the basement like mine, I would suggest a quieter fan. That said, these move huge amounts of air, and in this case the air stream hits the CPU directly. My CPU temps hang around 10C, which is pretty amazing.

Once the box was built, I installed the prerequisites for vCenter, installed vCenter and its supporting applications (vCenter Update Manager, Syslog Collector, and so on), and then installed and configured Starwind iSCSI. Articles on installing and configuring Starwind as an iSCSI target for ESXi are coming soon.

vCenter Server Build Pictures

Finally, the requisite pictures of the build are below. Enjoy, and as always, if you have any questions, leave them in the comments.

iStar D Value D-213-MATX 2U Rackmount Case

Vantec Tornado 80mm Case Fans

ECS A55F-M2 w/AMD A4-3300 Llano and 16GB RAM

Intel Pro/1000 Dual GB NIC PCI-X in PCI Slot

vCenter Server Build Completed

Vantec Tornado 80mm Case Fan Demonstration (video)

  • William Hardy

    Don,

    I’ve really enjoyed reading through your site. We’ve pursued many similar goals with our home networking setups. This comment isn’t exactly related to the above post, but hopefully it is still relevant.

    I’ve noticed the Starwind iSCSI target software is a recurring theme in many of your posts. It looks to be the backbone of your storage network. How has this performed for you? I considered it when revamping my home network, but I was focused more on IOPS and eventually went with FreeNAS (FreeBSD) for the ZFS implementation. It allowed for SSD caching on many levels, which helps alleviate running out of DDR cache on the Starwind system (I’m guessing?).

    Have you done performance benchmarking on the iSCSI implementation to see how it fares? It would be interesting to see it against Linux/Solaris/Nexenta/FreeNAS. I’m by no means touting these other solutions; I’m just looking for the ideal product for my use case.

    • William:

      Thanks for the compliments; I have a number of articles I’m behind on posting, and hope life gets out of the way long enough this weekend for me to do it.

      Yes, Starwind has become a major component of my storage system, along with local storage, which I still use to a large degree. The performance has been excellent, especially with the RAM cache, and I have some posts coming within the next week detailing speed comparisons between local storage and Starwind iSCSI (with and without RAM cache, and using FancyCache, which I’ve fallen in love with).

      Although I looked at FreeNAS, my lab rules have always included “no one box should ever be single-purpose”, so a standalone storage box didn’t fit my rules, although I think FreeNAS is a wonderful product. Also, I made some huge purchases of RAM last year when it was very cheap, and I have ~300GB of DDR3-1600+ RAM sitting around in 8GB sticks. I also came across a cheap lot purchase of 64GB Crucial M4 SATA 6Gb/s drives, and have 25 of those lying around. My tax check went to very good use.

      My love for Starwind comes from the fact that (a) it’s free, (b) you can do HA iSCSI with the free edition, (c) it’s very lightweight, so I can run it on a vCenter server that only has a dual-core Llano CPU, (d) it has good performance, (e) you can RAM cache with it, and (f) it handles multiple NICs and lets you choose which ones it will use, so you can leave a NIC free for management, for example. Two other medals I give it are that (1) the iSCSI target is a file on the drive, so it can be backed up and is RAID agnostic, and (2) it has a de-duplication feature, which I use extensively in the lab on SSD datastores and which is quite effective.

      Within the next two weeks, as soon as I have all the figures together, I’ll be posting a comparison between NFS, Starwind iSCSI (both spindle and SSD, with and without RAM cache), and an Iomega StorCenter iSCSI target, including raw speeds and IOPS, so keep an eye out.

      And like you, I’m not touting a particular product. I’m always clear that my rules are to create the cheapest, most effective lab possible, and the “ideal product” is so highly subjective to each person’s needs that I just try to lay out my findings. Like you, I’m forever open to new possibilities 😀

      • Christian D

        I’ve been reading your articles here (thank you, by the way!) and was wondering how many VMs you could actually run in this current configuration. Also, I was thinking of building something like this with maybe 8 cores and 32GB RAM, and maybe adding a second box so I can learn more about ESXi and failover to another host. Can you describe in detail how you are achieving 16,000 to 20,000 IOPS (what tweaks?), as I am just starting to learn about ESXi and clustering. Thank you for all of your articles! I’m spreading the word to my I.T. colleagues about your blog! 🙂

        • Thanks for the compliment, Christian, and glad you’re enjoying the articles. This particular box isn’t an ESXi server, where you would keep VMs, but rather a vCenter (control) server that also functions as an iSCSI (storage) server. Your plan sounds good, and if you’re looking for an 8-core/32GB build, I’d suggest one of the ESXi server builds on this blog. The number of VMs is highly dependent on how many cores and how much RAM you give each VM. You *can* over-allocate, but how far depends on your workload.
          As for the IOPS, I am using RAM caching via PrimoCache on the volume I create the iSCSI target on. You can find more information on that here: http://thehomeserverblog.com/esxi/esxi-iscsi-raid-disk-performance-improving-through-ram-cache/

    • I have been doing some performance benchmarking, which I hope to post in the next couple of weeks; it’s simply taking a while, as it’s quite comprehensive, comparing iSCSI, NFS, local datastores, and an Iomega StorCenter device. For Starwind, I test without RAM caching, with its native RAM caching, and with RAM caching via FancyCache (which I am quickly becoming enamored with). A quick check shows that using FancyCache as a 12GB cache in front of Starwind is producing the best results, with IOPS in the 7,500-10,000 range at 8K (where most of the VMware workload falls).

      • William Hardy

        I’m very interested in seeing your results! I recently built a home lab and installed FancyCache per your recommendation. I was able to dedicate 40GB of RAM to my SAS storage (the majority of my VMs live there), and 10GB to my RAID 6 SATA and RAID 10 SATA arrays.

        What a monster benefit! IOPS in the 16,000 range. I’m also using deferred writes for maximum write speed, as all my storage is on UPS.

        In case you’re interested, my results are from a ProLiant DL380 G6 using an HP Smart Array P812 controller, with a SAS-attached MSA60 holding 12 x 15K drives.

        Astounding how cheap the MSAs are on eBay; best enclosure for the money, in my opinion (as long as you don’t use SSDs, since the cheap models cap SATA at 1.5Gbps).

        • Very nice: I’ve found that FancyCache is definitely the way to go for drive caching, especially with Starwind, as you can stick with the free version and still have a massive cache. Since I have a couple of UPSes, I’m also using deferred writes, and after a bit of network tuning I’ve got my IOPS up in the 15,000 range, around where you’re at, compared to the 200-300 you’d see, for example, from a local datastore on a 5400RPM green drive. An absolutely amazing difference. I hope to be done with my results in the next week; I’ve just been busy, and testing is a lot of work 😀

          Thanks much for your results! If you’d be willing to provide some IOPS stats with and without FancyCache, I’d love to include them in my article. If so, let me know and I’ll send an email directly to you (I can see your address through the comment … I wouldn’t post it publicly, of course).

          • William Hardy

            Don,

            Sounds good. Send me an e-mail. Specify which tool you would like me to use and which test specifications so I can properly line them up with what you’re running.

  • _d3_

    Hi, are you using the free edition of Starwind? According to their site, the cache size for the free edition is limited to 512MB.

    http://www.starwindsoftware.com/images/content/datasheets/StarWind_iSCSI_SAN_Free_vs_Paid.pdf

    • Hello: you are very correct, and I had actually updated this article a couple of days after publishing it and had forgotten to push the new version to the site, so my apologies. The free version of Starwind is indeed capped at 512MB for the RAM cache, however, FancyCache performs admirably as a free extension of the RAM cache. You can use an unlimited amount of RAM cache with FancyCache, and can restrict it to a single drive; in this case, the iSCSI drive.

  • nbajam

    Great site, thanks!!
    Do you recommend buying vSphere Essentials for a home lab?

    • If you can afford it, I’d recommend it. Evaluation mode lasts for 60 days, and although you could always back up your configuration and move it to another evaluation install for another 60 days, if you can afford to license it, that is the way to go, since many features, such as DRS, are not available without a license.

      • nbajam

        Thanks for that; I’m getting slightly confused over the packages you can buy, as I’ve just been on the VMware site again. What is the next step up from Essentials? As I read it, with Essentials you cannot vMotion. Or can you? (I can afford Essentials, as I see it as a worthy investment for a home lab.) I’d appreciate your feedback.

        • VMware does a bad job with edition comparisons; CDW has a much better comparison chart at http://www.cdw.com/shop/search/software-titles/vmware-vsphere-5.aspx. For vMotion, you’d want at least Essentials Plus, but it’s a huge jump in pricing. I’ve been lucky enough that my last job provided me three Enterprise keys to use for my lab.

          • nbajam

            Thanks again, the chart helped. Yeah, Essentials it is!

            Right, next question: Essentials is £513.22 for a 1-year subscription or £610 for 3 years, but I cannot find what the subscription actually means.

            Does it mean I am covered for vSphere updates for 3 years, so I would in turn be licensed for vSphere 6 when it comes out?

            Cheers