Building a Homemade SAN on the Cheap: 32TB for $1,500


One of the things I’ve been working on getting for my lab is a dedicated SAN/iSCSI box, and I was looking to build a quality, performing box, on the cheap. Although those statements usually don’t go together, you can build a quality SAN for a reasonable price.

For those of you following my blog, you know that a couple of months back I built a cheap vCenter/iSCSI server using a low-power FM1 dual-core Llano and Starwind’s SAN/iSCSI software (the free version). It performed great, with IOPS in the 16,000-20,000 range, which is definitely not bad for a $300-$400 box. I also had a VM that served as my house domain controller and media hub, storing my 13TB of media, doing backups, and more.

What I really wanted though, was a standalone box with a hardware RAID controller that I could consolidate everything to, functioning as vCenter, SAN, iSCSI target, and backups with several TB to spare. Figuring it up on paper, I knew I should be able to build a 30TB SAN for around $1500, and set about doing just that.

This month finally sees the realization of that. One thing that slowed me down was simply money. Buying a ton of drives, a good RAID card, decent power supply and a more powerful CPU was going to cost a little. However, I got my mid-year bonus from work, and with a little from savings, got my shopping list together. Cutting some stuff down, I settled on some compromises, and got a Rosewill case that accepted fifteen 3.5″ internal drives, a LSI 84016E hardware RAID card with battery backup, and a few other items. Everything arrived this weekend, and I thought I’d document my build out, as I always try to do for you guys and gals.

Build Goals for Whitebox SAN


The build goals here were simple. The server would fulfill several roles: (a) my physical vCenter box for my ESXi lab, (b) a SAN for the house, holding my 16TB+ media collection, (c) iSCSI targets for the ESXi lab and house computers, (d) a backup target for all lab VMs and house computers, (e) storage for video from my outside and inside cameras, and (f) a Subsonic media streamer for when I’m away from the house. This meant I needed plenty of RAM (as I’d be RAM caching in front of the RAID card) and more than just a dual-core Llano. In addition, I wanted the motherboard, even though it was a consumer model, to support ECC RAM if possible. That’s not common, but there is a sizable subset of consumer-grade motherboards that do support ECC RAM.

Still, with a large outlay going toward 16-24 2TB drives, and with the cost of RAM still high after the price hikes at the first of the year, I wanted to stay with consumer-grade hardware, as I always do, both for the price break and to show that consumer-grade hardware can be reliable and approach mid-range enterprise performance.

The actual deciding factor ended up being the case. Good quality cases with 16-24 hot swap bays are quite expensive. We’re talking $300+. What I did find however was a Rosewill 4U case with fifteen internal 3.5″ bays, lots of cooling, and plenty of room, and all for ~$100. So 16 drives it would be.

The hardware RAID card turned out to be an easy choice, and I picked up a PCI Express x8 LSI MegaRAID 84016E hardware RAID card with battery backup and cables included for $74.99 on eBay. Although this is a SATA II card, it gets good performance, is reliable, has great driver support, and was just dirt cheap. It also supports online capacity expansion and RAID migration, which lets me grow the array after it’s created. That was important, because the plan was to build an array just large enough to transfer everything off my current array, then move the original drives over and do an online expansion, ending up with a full sixteen 2TB drives in the final array. One of the goals, after all, was to gather all my disparate storage into a single pool.

As for hard drives, I have to admit I have a soft spot for Hitachi Ultrastar drives, and that’s what I built out with. The WD Green drives I had a boatload of aren’t hardware-RAID friendly, so I sold them off on eBay, bought Hitachi Ultrastars, and actually made a little money in the process. Running up on a case of 2TB Hitachis being sold off by a data center clearing stock didn’t hurt either.

Network connectivity was another place I decided to change how I was doing things, both with the SAN and inside the lab. I had been monitoring prices on Intel Pro/1000 quad gigabit PCI-e NIC cards on eBay, and caught a seller putting 10 of them up for a mere $49 each. This allowed me to pick up one for the new SAN, and then one for each of my ESXi nodes. So each node now has five gigabit LAN ports, with the motherboard LAN devoted to the management VLAN, since I don’t need stellar performance there.

Build Hardware List for Cheap SAN


Below you’ll find a basic list of all the hardware I used to build my SAN, and then a piece-by-piece breakdown and description of each piece, along with why I decided on that particular part. Understand that there are a ton of different ways to build a SAN, and that this is just the direction that I ended up going.

The prices listed are what I actually paid, and you may simply not be able to match them, but with patience you can find the same deals over time. Most parts were sourced off eBay or Amazon Warehouse Deals, or caught on sale, so you most likely won’t see retail prices here.

Total Cost for Cheap SAN: $1,679.85

Cheap SAN Case: Rosewill RSV-L4500 4U


Cases for SANs can be quite expensive, and since I didn’t really need hot swap bays, I went with this case as an alternative. You get a ton of room, decent build quality, and fifteen, count ’em, fifteen 3.5″ internal hard drive bays. It has room for dual power supplies if you want them, or a standard ATX consumer power supply. It has three 120mm fans in the front of the case, another three 120mm fans between the hard drives and the motherboard, and another two 120mm fans in the back. Although it has a lot of fans, the noise level is not loud, and the air flow is good. Overall build is solid, and I didn’t have any issues putting it together. Another great plus here is that the drive bays are all tool-less.

You’ll notice that I have 16 drives as well as two SSDs, which is more than the number of bays the case has. The SSDs are mounted on one internal side of the case, while the extra hard drive is mounted on the other side. Any more than this, though, and you’d definitely have problems finding room.

Homemade SAN Motherboard: ASRock 970 Extreme 3 AM3+


This cheap, feature-packed motherboard sports two PCI-e x16 slots (running at x16 and x4), which lets me host both my RAID card and my Intel Pro/1000 quad PCI-e gigabit NIC. It also has two PCI-e x1 slots and two PCI slots (one of which hosts a cheap PCI video card for local output during the OS install and so forth). The motherboard gigabit NIC will be used for management RDP and will also serve NAS and media traffic for the house computers and XBMC VMs, while a cheap PCI-e gigabit NIC will sit on the house backup-traffic VLAN. The motherboard has SATA3 ports that pair well with the SATA3 SSD I use for the OS, and four RAM slots, allowing me to drop in 32GB of memory using 8GB sticks.

This is the same motherboard that I use in one of my ESXi Whitebox builds for my home ESXi lab, and it’s a great board.  You can see that build at ESXi 5 AMD Whitebox Server for $500

Cheap SAN CPU: AMD FX-6300 6 Core


The recommendation from Starwind iSCSI, the software I use for my SAN (they have a free version that allows unlimited storage for non-HA setups), is that more cores at a lower clock rate are better than fewer cores at a higher clock rate. Thus, I picked up this six-core AMD FX-6300 for $100. That’s a great performance-to-price ratio, and it will more than cover the heavy lifting for this box. Starwind recommends a Xeon E5620 at minimum, and the FX-6300 beats it out in performance, both single-core and overall, at about a quarter of the Xeon’s price. You can find a comparison of the two at CPU Boss: http://cpuboss.com/cpus/Intel-Xeon-E5620-vs-AMD-FX-6300

Homemade SAN Memory: Crucial Ballistix 32GB DDR3-1600


Memory has gone up considerably since the first of the year, when I was buying 32GB DDR3-1600 kits for $120 off eBay. Ah, how I wish I had spent a grand or so on memory then. That’s not happening any more. That said, I caught someone on eBay selling off a couple of 16GB kits for $100 each, and grabbed both of them. I’ll want plenty of memory for the RAM cache I’ll be using for the SAN.

With 32GB total, I’ll leave 4GB for the OS and devote the remaining 28GB to the RAM cache. This should be plenty of cache for normal operation of the SAN along with any iSCSI activity coming from the lab. If this were a production environment, I might want to go higher.

Homemade SAN Memory: ECC Option


As stated earlier, this motherboard does support ECC memory, although it is not listed on the ASRock website or in any documentation that I can find. However, the ECC options do appear once the motherboard is loaded with ECC RAM, and an email to ASRock customer service verified that this motherboard does fully support ECC RAM. Although it’s not common for consumer motherboards to support ECC RAM, a number of them do, both documented and undocumented. As this particular board shows, sometimes you simply have to try it.

ECC RAM is recommended in storage builds such as SANs. Although the chance of memory corruption is low, and I personally don’t use it, I do like having the option, and, simply for best-practices reasons, I wouldn’t recommend running the build without it. I will say, however, that ECC can increase the price of the RAM dramatically, so be aware of that.

Cheap SAN Power Supply: Corsair HX750


This power supply not only has wonderful reviews, but is Gold certified and has 12 SATA power connectors, as well as 4 PCI-e connectors. Three Molex to dual SATA power adapters, combined with the 12 existing SATA connectors, allowed me to power all 16 Ultrastar drives, as well as the SSD drives. To this power supply’s credit, the documentation states that if running at less than 50% load, the fan does not run, and even with all 16 drives online, the power supply fan stays off.

Another great feature here is that the power supply is semi-modular (the 24-pin and 8-pin CPU plugs are not modular), allowing me to use just the cables that I need inside the box. Although I have a roomy 4U chassis, with 18 drives and 8 fans, things can get crowded quickly cable-wise.
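
If you’re speccing a different power supply, it’s worth sanity-checking the connector count the same way. The trivial math, using this build’s numbers, looks like this in Python:

# Drive power connector sanity check, using the numbers from this build.
native_sata_connectors = 12
molex_to_dual_sata_adapters = 3

available = native_sata_connectors + molex_to_dual_sata_adapters * 2  # 18 total
needed = 16 + 2  # sixteen 2TB Ultrastars plus two SSDs

print(f"{available} SATA power connectors available, {needed} needed")
print("Fits" if available >= needed else f"Short by {needed - available}")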

Homemade SAN OS Hard Drive: Crucial M4 128GB SATA3 SSD


These Crucial M4s have proven to be my go-to workhorses for all OS drives in my home computers and any physical boxes that I build out. They are lightning fast, and I have never had a single problem with them. I have two of these in a RAID0 as the system drive in my personal workstation, and get 1,100 MB/sec reads and close to 700 MB/sec writes. Also, for general system performance, one of the best ROIs you will get is putting an SSD in as your OS drive, due to the responsiveness gain. Both the SSD and the motherboard have SATA III ports, so it’ll run at full speed.

There is also an additional one of these that is used specifically as an iSCSI target for node 1 of a highly available iSCSI node. Starwind SAN/iSCSI Free Version will allow you to create a 128GB maximum highly available iSCSI target, and this drive is the primary node in that.

Cheap SAN RAID Card: LSI MegaRaid 84016E


These are proven workhorses in the enterprise world, and although generally considered outdated for modern data centers, they are more than adequate for a home lab. They use a PCI-e x8 interface; do RAID 0, 1, 5, 6, 10, 50 and 60 (I will be using RAID6); have 256MB of onboard RAM; and mine came with a battery backup unit and the cables, all for $80. Support for these is plug-and-play. I purchased SFF-8087 to SATA breakout cables and connected directly to the SATA drives. This is a SATA II RAID card, but with 16 drives, that won’t cause a bottleneck.

A note on RAID6 performance: although RAID6 isn’t known for its I/O performance, the RAID card cache, combined with the RAM cache I’ll be running in the OS, almost cancels that penalty out. With 32GB of RAM, I’m using 28GB of it as a RAM cache, and am seeing performance in the 2,000MB/s+ range for both reads and writes into and out of the cache.

IMPORTANT NOTE: This card only supports up to 2TB drives.  If you are using this card in this build, then you will be maxed out at 32TB total before configuring your RAID.  This gives you a maximum of 32TB at RAID0, 30TB at RAID5, 28TB at RAID6, or 16TB at RAID10.  Please see http://mycusthelp.info/LSI/_cs/AnswerDetail.aspx?s&inc=7947
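
If you want to double-check that math for your own drive count, here’s a quick back-of-the-envelope Python sketch. It is purely illustrative: it assumes equal-size drives and ignores filesystem overhead and the TB-vs-TiB difference.

# Rough usable-capacity calculator for the RAID levels this card supports.
def usable_tb(drives, drive_tb, level):
    if level == "RAID0":
        return drives * drive_tb            # striping, no redundancy
    if level == "RAID1":
        return (drives // 2) * drive_tb     # mirrored pairs
    if level == "RAID5":
        return (drives - 1) * drive_tb      # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * drive_tb      # two drives' worth of parity
    if level == "RAID10":
        return (drives // 2) * drive_tb     # striped mirrors
    raise ValueError(f"unknown RAID level: {level}")

for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(f"{level}: {usable_tb(16, 2, level)} TB usable from 16 x 2TB drives")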

Homemade SAN RAID Hard Drives: 16 x 2TB Hitachi Ultrastar SATA Drives


For years, I have been a long-standing fan of the Hitachi Ultrastar drives. They are 7200RPM drives rated at 2 million hours MTBF, with a five-year warranty. So far, I have only had one fail, and Hitachi overnighted me an advance replacement under warranty. They perform well, run cool, and I can usually find them on eBay at very competitive prices. For this batch, I found a data center dumping an entire case of them, sealed in static bags, and they cut me a deal of $50 a drive for the whole case. Note that Hitachi makes both an Ultrastar and a Deskstar; the Ultrastar is the enterprise version of the drive.

Cheap SAN NIC: Intel Pro/1000 Quad Gigabit NIC PCI-e


Those of you who follow my blog know I’m a fan of the Intel gigabit NICs, and the quad NICs are no exception. The reason for all the NICs is that, since I only have three ESXi nodes, I decided to just dedicate one NIC on this card to each node. The fourth NIC is used for the iSCSI targets for the house computers.

Of course, these NICs will do jumbo frames, which you’ll want for any iSCSI application, both on the ESXi nodes and on the SAN itself. Another thing commonly forgotten is that your switch must support jumbo frames as well. If any link between your SAN and the ESXi nodes themselves does not support jumbo frames, it will short-stop you there. If your switch does not support jumbo frames, one option is to direct-connect your NICs with a patch cable.
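
One easy way to verify jumbo frames end to end is a don’t-fragment ping from the SAN to an iSCSI vmkernel address. Here’s a rough Python sketch of that check; it assumes Windows ping syntax (the -f and -l switches), and the target IP is just a placeholder for your own iSCSI VLAN address.

# Jumbo frame check: 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers
# equals a 9000-byte packet. With -f (don't fragment) set, this only succeeds
# if every hop in the path (NIC, vSwitch, physical switch) passes a 9000 MTU.
import subprocess

TARGET = "192.168.10.21"    # placeholder: an ESXi iSCSI vmkernel IP on your SAN VLAN
PAYLOAD = 9000 - 28

result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), "-n", "2", TARGET],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode == 0 and "fragmented" not in result.stdout.lower():
    print("Jumbo frames look good end to end.")
else:
    print("Something in the path is not passing 9000-byte frames.")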

Note that I very nearly went with fiber NICs for the SAN, at least for the ESXi-node-to-SAN connections. You can pick up fiber cards and cables very cheaply off eBay. However, prudence and common sense finally won out: this is my lab, I would be very unlikely to saturate all four gigabit NICs, and even if I did, the saturation would be so brief that it wouldn’t make sense to spend the extra money. Many enterprise systems are still running on copper, and copper 10Gb Ethernet has a strong presence in enterprise ESXi solutions.

Homemade SAN RAM Cache: PrimoCache


This software was previously known as FancyCache, and I’ve sung its praises a number of times. Using it on a single WD Green drive, I’ve seen IOPS go from around 200 to over 15,000. Basically, it sits as a RAM cache between your drive and the OS, caching data at the block level. It doesn’t have to be a physical drive, either; it works just fine sitting in front of a virtual drive that a RAID controller presents to the OS.

Currently, it’s in beta and free, and you get a beta license for 180 days. If you run out of that, you get an extended license that basically lasts forever. You can find more information at http://www.romexsoftware.com/en-us/primo-cache/ Since I have 32GB of RAM in this box, I’ll be keeping 4GB for the system and devoting a full 28GB of RAM to the drive cache. It supports deferred writes, so it flushes its data out to the drive at the drive’s maximum speed while keeping the most commonly used blocks in RAM.
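
If you’re curious what a deferred (write-back) block cache is doing conceptually, here’s a toy Python sketch of the idea. To be clear, this is not how PrimoCache is implemented; it’s just a minimal illustration of serving blocks from RAM and flushing dirty ones to the backing disk later.

# Toy write-back block cache: reads and writes hit RAM first, and dirty blocks
# get flushed to the backing store later. Purely conceptual -- a real block
# cache works at the OS block layer with far more sophistication.
from collections import OrderedDict

class WriteBackCache:
    def __init__(self, backing_store, max_blocks):
        self.backing = backing_store      # dict of block_id -> bytes (stands in for the disk)
        self.cache = OrderedDict()        # block_id -> (data, dirty), kept in LRU order
        self.max_blocks = max_blocks

    def read(self, block_id):
        if block_id in self.cache:        # cache hit: serve straight from RAM
            self.cache.move_to_end(block_id)
            return self.cache[block_id][0]
        data = self.backing.get(block_id, b"\x00" * 4096)   # miss: go to "disk"
        self._insert(block_id, data, dirty=False)
        return data

    def write(self, block_id, data):
        self._insert(block_id, data, dirty=True)   # deferred: not on disk yet

    def flush(self):
        for block_id, (data, dirty) in self.cache.items():
            if dirty:
                self.backing[block_id] = data
                self.cache[block_id] = (data, False)

    def _insert(self, block_id, data, dirty):
        self.cache[block_id] = (data, dirty)
        self.cache.move_to_end(block_id)
        while len(self.cache) > self.max_blocks:   # evict the least-recently-used block
            old_id, (old_data, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:
                self.backing[old_id] = old_data    # write it back before dropping it

disk = {}
cache = WriteBackCache(disk, max_blocks=4)
cache.write(7, b"hello")     # lands in RAM only
print(7 in disk)             # False: the write was deferred
cache.flush()
print(disk[7])               # b'hello': now on "disk"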

Note that Starwind iSCSI SAN does have RAM caching, however, the free version is capped at 512MB of RAM cache per iSCSI device that you create.  Although this is plenty for a home lab situation, and I fully understand their reasoning behind not allowing full caching in the free version (you can’t give it all away for free), for my own personal use situation, I’d prefer having the 28GB of cache available.

SAN on the Cheap: The Build


The build, like most of my builds, went without issue. The Rosewill case has tons of room to work in, being a 4U, and probably the biggest issue I ran into was cable management. I don’t have a ton of patience for cable work to begin with, and matters were complicated by the fact that the cables included with the LSI 84016E were the one-meter-long versions. But after an hour of mumbling and grumbling, the cable management was done to some level of satisfaction and left alone. From there, it was simply a matter of installing the OS (using the SSD as the system drive for speed), running updates, installing some base utilities such as 7-Zip, Notepad++, Core Temp, and a few others, and running network cables for iSCSI.

After that, I created the initial RAID volume (it took approximately 10 hours to initialize in the background), transferred all my media over from my old domain controller, moved the now-empty 2TB drives from that box into the new SAN, did an online capacity expansion (this took around 24 hours), and made a secondary RAID1 virtual drive as a backup target for all servers in the house. The final step was to install Starwind iSCSI SAN and configure several iSCSI targets: (a) a 2TB general datastore for VMs, (b) a 256GB SSD datastore for high-performance VMs, (c) a 128GB highly available datastore for SRM/failover testing, and (d) a 2TB datastore for additional “users” folders across the home network. NFS shares were also created for my XBMC VMs and my extensive video collection (1,800+ movies, 160 TV shows, and 5,200+ albums).

Homemade SAN: Pictures


Below you’ll find some basic pictures of the build, along with some configuration screens and more. Forgive my less-than-stellar cable management skills: I just don’t have the patience. Note that the SSD, since it has no moving parts and doesn’t exactly require special handling, is tucked into a pocket near the front drive bays. It’s not mounted to anything, but since this will be a rackmount server, I wasn’t much worried about it.

Also, I included a couple of screenshots of CrystalDiskMark benchmarks on the RAID array. These are presented both with and without RAM caching to show the difference. Basically, I cache the entire RAID volume using PrimoCache. If I were not using the RAID volume for iSCSI targets, I would not bother with RAM caching, as this is primarily a media storage array, and I don’t need any performance beyond what you see in the uncached tests.

Homemade SAN: Rosewill RSV-L4500 4U Case
Rosewill RSV-L4500 4U Accessories
SAN Power Supply: Corsair HX750
Power Supply: Corsair HX750 Connectors
Homemade SAN Motherboard: MSI 970A-G43 AM3+ w/Memory
SAN Assembled: Front Panel
SAN Assembled: Front Panel Open
Cheap SAN: Top View
Homemade SAN: Drive Bay View
Cheap SAN: Motherboard View
Custom SAN: Side View
Custom SAN: RAID Drives View
Cheap SAN: Motherboard View
Custom SAN: Rear View
Homemade SAN: All Racked Up
Network Stack: Color Coded
Homemade SAN: CrystalDiskMark RAID Benchmark (No RAM Cache)
Homemade SAN: CrystalDiskMark RAID Benchmark (With RAM Cache)
Custom SAN: System Properties
Homemade SAN: Network Connections
Custom SAN: LSI MegaRAID 84016E RAID Array
Custom SAN: Core Temp (Idles @ 10C)

  • dfortier

    Great read. Hopefully I can find a drive deal like yours. I’m trying to do a lot of the same things that you have done and get my certifications along the way.

  • Karthik Reddy

    That was a nice build. I could get everything for almost the same price except the HDD. I hope to get that kind of a deal.

    • Thanks Karthik. The key was just patience for me. Deals are to be had on eBay if you’ll wait them out and buy in bulk. A key strategy that I use is looking for bulk lots for sale with “Best Offer” options so that I can actually make an offer at a lower price.

      • Karthik Reddy

        Actually I see only the lots with 6-8 HDD’s lately.

  • rom1

    Hello

    How do you organize data & backups through the different setups? Do you use some specific RAID or duplication setup or one of your directory is automatically uploaded in a cloud (crashplan/backblaze)?

    thanks

    • It’s a multi-prong approach. One of the virtual drives on the RAID setup is a 2TB RAID1, and I use that for backups. For software, I use Bacula, an open-source backup solution that uses that 2TB RAID1 volume as its storage: http://www.bacula.org/en/ It’s a centralized backup that backs up all the computers in the house, plus all my ESXi VMs, three times a week on a rotating schedule with a 60-day rotating retention. Finally, the folders on that backup drive are backed up to Amazon S3 Glacier storage once a week with a six-month rotating retention there.

  • You’re not paranoid at all, although I’ve never personally seen corruption due to the use of non-ECC RAM, and I’ve been running SANs for years without it. I did include an option for a motherboard (even consumer grade) that supports ECC RAM for those who want it. The ASRock 970 Extreme3 that I use in my ESXi nodes supports ECC RAM, and I run ECC RAM in those nodes.
    The choice of Windows was simply because of multi-use, rather than anything specifically geared to the OS. This box functions as a SAN (iSCSI targets for the lab and the house, plus 13TB of media for the HTPCs in the house), the vCenter server for the ESXi lab, a Subsonic media streamer, and a private WoW server. It handles it all with ease, and I get blazing performance off the iSCSI targets.
    Now, if this were production, I would have done a few things differently (including the ECC RAM). On the Windows side, I know several enterprise-level companies using Starwind as a SAN solution, and Starwind is exclusively Windows. This is simply presented as a multi-use SAN for house/ESXi lab use.

  • Brian

    Do you have any options for Sata III Raid cards?

    • If you’ll take a look at http://thehomeserverblog.com/home-servers/extensive-iscsisan-testing-by-the-home-server-blog/ the test SAN I built out there has a modern, well-benchmarked SATA III RAID card in it, the LSI MegaRAID SAS 9260-8i 6Gb/s 8-port RAID card. That would work great here, and I picked the one I have up off eBay for $249. I’ll add this as a SATA III option. SATA III is still in its infancy in RAID cards, so I don’t know of any 16-port internal options.

      • Brian

        The closest, and most affordable i’ve found is: Adaptec ASR-71605E. It looks like it has 16 internal. What do you think about this one?

        • Looks like a solid RAID card with a good feature set, including RAID migration, although I can’t find any independent benchmarks for it. eBay has one for $350. I’ve used the 68xx series from Adaptec, and they are solid controllers that perform well, so I’ll assume performance is at least as good with the 7xxx series. You can read a great benchmark article about some 6Gb/s RAID cards at http://www.tomshardware.com/reviews/sas-6gb-raid-controller,3028.html

          • Brian

            Been thinking more and more about this. Would a beefed-up 6Gb/s RAID card really offer any additional performance over 3Gb/s with spinning disks? As far as I can tell, the 6Gb/s cards would only benefit from the use of SSDs? Is this correct?

          • For mechanical drives, there’s no performance increase for a single drive, and in some cases decreased performance. A good treatment is at http://www.trustedreviews.com/Seagate-Barracuda-XT-2TB-SATA-6Gb-s-HDD_Peripheral_review#tr-review-summary
            In a multi-drive RAID situation, a lot depends on the RAID level and workload, but the few benchmarks I’ve seen show little if any performance increase there. It comes down to being able to saturate the link. My own benchmarks for a 6Gb/s SAS controller with 4 x 15k 6Gb/s drives in RAID0 vs. the same drives on a 3Gb/s SAS controller show no gains.
            SSDs *are* a completely different monster, and they have been shown to truly shine when given the bandwidth. My own desktop runs two 128GB 6Gb/s SSDs in RAID0 on just the motherboard RAID and saturates the controller @ 1,100MB/s … completely flatlined.

            In my opinion, I’d stick with a 3Gb/s controller for mechanical drives and pocket the money.
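
To put rough numbers on that “saturating the link” point, here’s a quick Python sketch. The drive throughput figures are just typical ballpark values, not benchmarks of any particular drive.

# SATA uses 8b/10b encoding, so usable throughput is roughly the line rate divided by 10.
def sata_usable_mb_s(link_gbps):
    return link_gbps * 1000 / 10    # e.g. 3 Gb/s -> ~300 MB/s usable

mechanical_hdd = 150   # MB/s, a typical 7200RPM sequential rate (assumed)
sata_ssd = 500         # MB/s, a typical SATA SSD (assumed)

for link in (3, 6):
    cap = sata_usable_mb_s(link)
    print(f"SATA {link}Gb/s ~ {cap:.0f} MB/s per port: "
          f"HDD uses {mechanical_hdd / cap:.0%} of it, SSD uses {sata_ssd / cap:.0%}")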

          • Brian

            I’m thinking that’s the smartest choice. Do you have any experience with ZFS or FreeNAS8? I’m wondering what the performance of a similar build using FreeNAS8 would be.

          • I’ve worked with both: ZFS in commercial settings, and FreeNAS8 in my lab. Unfortunately, I don’t have any benchmarks to offer up at the moment, although I’m doing an extensive set of them now, comparing all the major NAS solutions. It’s just an exhaustive process, and it will most likely not be ready for another month and a half or two.

            FreeNAS is definitely a viable alternative, but I’ve personally seen better iSCSI performance from other solutions. I’d say it’s definitely viable for a lab and media storage, though, even if I’d expect performance to be lower than this build. A pretty good (although limited) comparison is at http://hardforum.com/showthread.php?t=1755888

            The reason I went Starwind and Microsoft was simple economics and my own personal use case. My SAN functions as the SAN, media storage for the house, a Subsonic media server, and a physical vCenter server. If I wasn’t looking for all of that, and just needed iSCSI storage and media storage, then FreeNAS would at least be on my radar.

          • Brian

            I’d definitely love to see the benchmarks. After more reading, OpenIndiana looks like a consideration as well.

  • Frost Spire

    Good read. I will be using your guide as a foundation for a slightly smaller build.

    • Glad I could help out. Would love to hear about your build when you get it done.

  • Demosthenes_light

    Nice job and a good result. Did you ever consider Openfiler or do you have any experience with it?

    • Yes, I’ve had experience with it, and I do like it. My decision here was more because this is an extreme version of a multi-use server. It handles a ton of stuff: SAN storage, DLNA server, music server, iSCSI target, vCenter server, private WoW server, and more. OpenFiler does one thing and does it well, but it’s just one thing. This was built around my own personal needs, as so many builds are.

  • Stefan Ytterström

    Hi, thanks for the good inspiration. I finally gathered all the hardware needed for my own build. Unfortunately, I ran into issues with the LSI adapter: it can only detect 2TB of each of my 3TB Seagate NAS drives. Does anyone have experience using this card with drives bigger than 2TB? A real show-stopper for me right now. Grateful for any input.

    thanks
    Stefan

    • Stefan, if you purchased the 84016E, then you’ll be capped at 2TB per drive. I did mention this was an older card, but I’ll make sure to update that particular paragraph to be clearer about drive sizes; this is why I used 2TB drives with this build. Sorry it wasn’t clearer.
      Here is the LSI answer on that, which also lists HBAs that will accept 3TB or greater. I’ll make sure to add this to the post:
      http://mycusthelp.info/LSI/_cs/AnswerDetail.aspx?s&inc=7947

  • mike

    How does that RAID card work? I see it only has 4 ports out the back. How are you able to connect all of those drives?

    • If you take a look at the pictures, you’ll see some red cables leading up to the drives. The four ports that you see on the RAID card are SFF-8087 ports. They will accept either a cable that runs directly to a backplane that you connect drives to, or a SFF-8087 to SATA breakout cable, which breaks out a single port to 4 SATA connectors. That’s what I’m using. Here is an example on Amazon http://amzn.to/1i8kKP4

  • James

    Have you done any testing with VMware? I have a Thecus N7700 and a 5500 and only get around 150-200MB throughput when using VMware. I also own an HP P2000 that I get close to 900MB throughput from. I would just buy another P2000, but they are over 10K. Your solution looks like it MAY get me close for a fraction of the cost. What are your thoughts?

    • James

      I should add that with both of my Thecus units, when not using VMware and just connecting from Windows 2008 with the iSCSI initiator software, I get 600+ MB.

  • Nick

    Don – do you have any more insight to share on finding those 2TB Hitachi Ultrastar drives? I’m having a heck of a time… for the past 3 weeks I’ve been making offers on 15-20 drive quantities from all of the sellers on eBay (white labels too) to no avail. I can find 1TB drives in the right ballpark, but these 2TBs – and in particular Hitachi Ultrastars – are rather elusive.

    • Unfortunately, that’s the problem with deal hunting a lot of the time; it comes in waves, and you have long dry spells in between. Looking around on eBay, the deals are definitely dry at the moment on the Ultrastars. I see the occasional one at $80, but this may be one you’ll have to wait out, or shell out an extra $200-$400. Also, although they have half the mean-time-to-failure rating, the Deskstars are exceptional drives too, and I have successfully used them in several RAID configurations mixed in with the Ultrastars.
      Three of my best tips are:
      1. Doing a “Follow This Search” after you get your search like you want it … you can find this option at the top of the page on eBay and it will send you email alerts of newly posted items. I have about a dozen of these constantly running watching for specific items I buy a lot of … memory, Hitachi Ultrastars, and so on. To illustrate this, I picked up a dual 2011 socket motherboard I’d been after for 1/2 of the retail because I jumped on it quickly after being notified by a saved search.
      2. Watch for auctions and use an auction sniper … I personally use Gixen.com, have for close to 4 years now, and it’s totally free. I use their premium service for $6 a year, but I used the free version for 2 years with no issues. You can set it to watch auction items with what you want to bid and it bids in the last 2 seconds of the auction end to “snipe” the item. It also allows you to group items, and cancel the rest of the group if you win one of them. I cannot tell you how much money this has saved me. On Black Friday I picked up a Juniper SSG-140 Firewall for $33 using the service. All while I slept.
      3. Patience. I hate this one. Hate it. But sometimes, I’ve laid in wait for weeks to get the perfect deal on something.

      • Nick

        Thanks for the tips. I’m starting to lean toward a 1TB solution due to time constraints here. But I do appreciate your tips.

        From a “storage architecture” standpoint, have you considered (or played with) forgoing the RAID controller and using Windows Server 2012 Storage Spaces? I originally approached this topic (lab virtualization) from the perspective of a VMware-centric worldview. But in digging into the storage market of late, Storage Spaces looks like some really neat technology for leveraging JBODs into something that resembles a “SAN replacement” option. By combining a bunch of HDD space and applying SSD caching (and mirroring or parity… it’s “software defined”) in your Storage Spaces volumes/LUNs/whatever-the-new-term-is, you can produce the performance characteristics you want with a low-cost approach. Anyway, I haven’t dug into Storage Spaces enough to know if this is viable for my lab. But on the surface, maybe I could combine SSD + HDD (not sure if I can do the memory cache thing in this model), replace the RAID controller with a generic HBA, and create an inexpensive SAN alternative exposed to ESXi hosts via NFS?

        • Yes, I have considered Storage Spaces, and I actually looked into it. Basically, it’s software RAID when you take it down to its core. My initial put-off was that expanding Storage Spaces isn’t quite as simple as RAID expansion through a hardware RAID card, and this was a big deciding factor at the time. I purchased a 16-port card but didn’t fill it up at first, and I wanted the option, for example, to go from a 12-drive RAID to 16 drives without too many issues. Admittedly, I haven’t done a deep dive into it, but I’ve scanned enough articles to know it’s more than plug-and-go.

          That said, I think it would still be a great foundation. You could still use Starwind iSCSI/SAN Free Edition for the iSCSI targets (it just outperforms Microsoft’s iSCSI target and is easy to use, to boot) in this case, too. The only possible limitation I see is if you’re planning to use a LOT of drives. You’ll still need an HBA card to add drives beyond what the motherboard can handle, but you’ve already thought of that.

          If you decide to go this route, I’d love to see some benchmarks and/or feedback on how it’s working for you.

          The great thing about a SAN is the actual OS/software side can be approached from so many different angles and they all “work”.

          • Nick

            Yeah, if I end up going this route I’ll follow-up and let you know how it works out. From the digging I’ve been doing on Storage Spaces, it sounds like it was a byproduct of work on Azure (Microsoft didn’t want to pay the storage tax either!). Yes, it’s software RAID-ish, but everything I’ve read points to it being quite good and – short of the replication bits built into EQ/etc… the performance characteristics are very SAN-like, particularly as you scale out the cluster (assuming clustering).

  • ebuddydino

    What would the running cost be?
    Have you tried looking at Calxeda products?
    EnergyCore solutions use ARM 64-bit SoCs with 2 built-in HBAs.
    Considering the energy consumed by any conventional approach to a SAN build, I feel the Calxeda solution is the best for a SAN.
    I have seen their products and they blow your mind.
    Have a look at this:
    http://www.calxeda.com/solutions/storage/

    cheers
    [email protected]
    kindly add me on Gtalk

  • DaPooch

    Nice post. Have you done any testing on power consumption for this beast? I’m curious to know what it would cost to leave one of these running 24×7. I’m currently working on consolidating all of my stuff to one box and using ESXi and passthrough RAID to a SAN VM rather than a dedicated SAN box. Any comments on that strategy? I’m thinking that would also take the network bottleneck out of the equation since I could leverage the 10Gbit virtual switch that way.

  • oespo

    Don, any chance we can talk offline? I just built my SAN following your build set. I’m having some issues with it, simple ones, but since you’ve done it, you could save me some time. Also, what do you use as a DLNA server?

    • We can talk: I’ll email you shortly at the email addy that you entered for Disqus. As for DLNA, I don’t set one up. I use XBMC to pull all my media inside the house, and Subsonic to push all of my media outside the house.

  • alvin

    Don, this is really nice setup. I am trying to locate the 16th HD and the SSD’s from the picture. I have the same case and I am trying to have 16 drives as well. Would you please describe how the other drives are installed? Thanks.

    • The SSD, which I use as the OS drive, I simply Velcroed to one interior side. For the 16th hard drive, I drilled two holes in a side wall and mounted it using the bottom screw mounts on the drive. Worked out perfectly.

  • Xiao Brian

    Don, can you offer any suggestions for a SAN controller that will run cross-platform? I see you’re using Solarwind, but the VFX shop I will be reconfiguring is Mac/Linux only and we will not be implementing any Microsoft products in the farm. Also, can you mention the options for scalability? At some point in the near future we will need to expand beyond one storage array and possibly link 3-4 together. We have been looking at the Proavio DS316FB, which is expandable to 192TB. Any information you can give would be greatly appreciated.

    • It’s Starwind, not Solarwinds, and its targets are cross-platform, since iSCSI is OS-agnostic. The issue is simply that you don’t want to deploy the SAN software itself on Microsoft.

      This question has the ability to generate a fairly long response and discussion. I can give you a good list of possibilities, but I’d recommend joining our Forum and posting the question, since not only can I answer it at length, but a number of our community members use other SAN software and would probably chip in. http://forums.thehomeserverblog.com/index.php

      • Xiao Brian

        My apologies for calling it “Solar.” Starwind is now all Windows-based as far as configuration and management go.

        “StarWind Software recommends using the latest Server-class Windows Operating Systems. StarWind supports all Windows Operating Systems from Windows Server 2008 to Windows Server 2012, including Server Core editions and free Microsoft Hyper-V Server. It is not recommended to install StarWind on Windows Storage server or Web server editions.”

        I need something Linux-based, preferably running on RHEL or CentOS, like OpenFiler. I’ve posted the question to the forums. Proavio is looking better and better every day.

  • Balazs

    Hello Don!

    Would you please share how you configured the StarWind software for your home lab? I’ve just installed it but am not sure about the settings.

    Your answer will be appreciated.

    • mame

      Hi Don,
      Nice post indeed! I followed all (well, most) of your hardware suggestions, but I’m wondering about the overall setup and configuration of StarWind and the network.
      Many thanks,
      Markus

  • hste

    Hello Don

    Nice build

    I am thinking of building something like this for a home lab, but I wonder whether it would be possible to use this box as one of the ESXi servers in an ESXi cluster, with the SAN part running as a VM, or whether it’s better to just use it as a dedicated SAN?

    hste

  • Aaron Shumaker

    Good case suggestion. 120mm fans will definitely be quieter than a lot of the stock fans you get with other servers. Those little 80mm (or smaller) fans have to run at really high RPMs to move the same amount of air, so they are very loud.

  • Edward Luck

    Dare I ask how many watts this thing pulls? Or more to the point, what the standby load of your house is?

    • Although I missed this several months back, I do want to comment on it. It’s not nearly as much as you might think. SATA drives pull around 5W each, the FX-6300 is a 95W processor, SSDs are just a couple of watts, the motherboard is a few watts … it doesn’t burn much power at all. Most systems, if you remove today’s power-hungry video cards, burn little power at all. You can do some easy calculations at http://www.extreme.outervision.com/psucalculatorlite.jsp one of my favorite spots to do calculations.
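
For anyone who wants to run that math themselves, here’s a quick Python sketch using ballpark figures like the ones above. The motherboard, fan, and electricity-rate numbers are my own assumptions, so plug in your own.

# Very rough power estimate for a box like this one (estimates, not measurements).
components = {
    "16 x 2TB SATA drives (~5W each)": 16 * 5,
    "AMD FX CPU (95W TDP, rarely pegged)": 95,
    "2 x SSD (~2W each)": 2 * 2,
    "Motherboard + RAM + RAID card + NICs (assumed)": 40,
    "Case fans, 8 x 120mm (~2W each, assumed)": 8 * 2,
}

total_w = sum(components.values())
for name, watts in components.items():
    print(f"{name:<48} {watts:>4} W")
print(f"{'Estimated draw':<48} {total_w:>4} W")

# Running 24x7 at an assumed $0.12/kWh:
kwh_per_month = total_w / 1000 * 24 * 30
print(f"~{kwh_per_month:.0f} kWh/month, roughly ${kwh_per_month * 0.12:.2f}/month")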

  • William Kidd

    I was thinking about buying a disk array but then I found your article! Are the drives hot swappable? Also, I was thinking fibre channel. Also, could I start with a few drives and add more later on? What do you think?

    • Yes, the drives are hot-swappable from the card, as they are on probably 99% of all hardware RAID cards. However, the drive bays are not hot-swap bays. If you want that, step up $100 and get Rosewill’s hot-swap case. It’s no longer sold on Amazon, but you can find it on Newegg: http://bit.ly/1meuBkE

      You can absolutely start with a few drives and add more later. The LSI cards have an online expansion feature. However, this takes time (from experience, 2-4 days if you’re in the double-digit TB range) and puts a great amount of stress on the drives. In some respects it’s quicker, and less stressful, to keep a few 5TB external drives around to copy the data off to, destroy and recreate the RAID, and copy it back. I now keep four 5TB drives for backups of the lab and everything else.
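
As a rough illustration of why the copy-off-and-back route can beat a multi-day online expansion, here’s a quick Python sketch. The 150 MB/s sustained rate is just an assumed ballpark for a single external drive, and you’d double the time to account for copying back.

# Time to copy a given amount of data one way at a sustained rate.
def copy_hours(data_tb, mb_per_sec=150):
    total_mb = data_tb * 1_000_000      # decimal TB -> MB
    return total_mb / mb_per_sec / 3600

for tb in (5, 10, 20):
    print(f"{tb} TB at 150 MB/s: ~{copy_hours(tb):.0f} hours one way")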

      Fibre channel is of course possible, but you’re talking increased price, so take that into account. My newest SAN is using Infiniband. Cheap, fast, and powerful.

  • Jacob

    Don, your home setup is truly astounding and the blog has been a lot of help in planning my builds. I’m really looking forward to building an iSCSI SAN like the one above to go with my upcoming ESXi box. I just have a few questions about connecting the iSCSI storage to the ESXi (and then making it available from the LAN to portable computers).

    1) I own an HP 4202vl-72 managed switch (with additional 16- and 24-port gigabit Ethernet modules), but as far as I know this switch does not support jumbo frames. What would be a solution for this issue? Can I simply run a Cat6 cable from each of the four NICs on the SAN to four ports on the hypervisor and pass one through to each VM, or would it be better to stick with the standard 1500 MTU and have them go through the switch in a VLAN separate from the LAN/Internet traffic?

    2) How many Ethernet ports should I (ideally) have on both my SAN and ESXi boxes? Given that you went with the quad NIC above I would think that is sufficient for the SAN but what about the VMware box? Am I correct in thinking that I need one port to the host for management which should connect to the standard LAN and for each VM I’d need to pass-through one port for connecting to the LAN for Internet/printing/etc and another port connecting to the SAN?

    3) How would I be able to make the SAN’s iSCSI storage available to portable computers (several MacBooks to be specific) that move from place to place (ie. work, school, etc) regularly and connect to the network via a WLAN? I know that you cannot combine iSCSI/SAN traffic due to the sheer volume of packet transmission between the SAN and connected servers (hence needing VLANs) but would creating a file server in a Mac OS X Server VM running on the ESXi Host and then making that file share available to the LAN be a viable option to make the storage available to the portable LAN clients? They will primarily be using it for Time Machine backups as well as some file storage.

    Best Regards,
    Jacob

  • Steve Ballantyne

    Hello Don, many years ago I built a crummy “sync” server using five 1TB drives and it turned out to be pretty useful (still dumping things to it 5 years later!). I would like to build another, but I need to justify my reasons. I am trying to avoid buying more EMC SAN storage – because frankly, it costs too much and the administration is absolutely miserable (give me FreeNAS – PLEASE). I am trying to compare your specs to my fibre attached EMC SAN drives, and the figures just don’t add up. I can’t run CrystalDiskInfo (doesn’t support ‘attached storage’) so I had to run HD_Speed. But I can make it use 4k blocks and compare figures. While you are getting 52MB without caching, I am getting 25MB on my EMC SAN – fiber attached drives. How is it that your SATA array is actually outperforming my fancy EMC SAN “powerhouse”? I feel like I am doing something wrong. If not with my comparisons, with my network configuration. 😀

  • Richard

    Just would like to confirm my understanding. This is fundamentally a
    computer with 16 drives attached to it, put into RAID5 at a hardware
    level, and then the large volume is made to look like iSCSI?

    What’s the fault tolerance of this? There is the potential to lose the entire array, correct?

    Further, I can’t seem to find Starwind iSCSI SAN. Just Virtual SAN.

    Much appreciated for the writeup. Thanks.

    • Richard, you are correct in your assumption; however, you seem to be making the faulty assumption that a commercial SAN includes anything fundamentally different from this. Having worked in cloud hosting for close to a decade, and with SANs and storage arrays for most of my life, I can tell you that redundancy in the sense of a cloned SAN (so that if one SAN goes down you can come back up with little or no downtime) is a step most companies never get to. SAN = Storage Area Network. At its most crude, most basic level, it is a computer with a lot of drives attached to it, in one of several RAID configurations (depending on the performance needed), with an iSCSI driver that presents targets and accepts connections. Everything beyond this is gravy. In this setup, the only fault tolerance comes from the RAID level itself. More can be added easily, such as multi-path I/O by adding an extra switch into play, or, the ultimate, a high-availability device. Remember, the free version of Starwind *will* allow you to do a 128GB HA device; I keep two SANs in an HA pair (this primary “big” SAN, and a secondary SAN that only has a 128GB SSD that is a mirror of a 128GB SSD in the primary SAN); the primary has my lab (around 50 VMs), while the HA SAN has .

    • In your post, you talk about fault tolerance, but you ask about losing the entire SAN, which falls more into the realm of redundancy. They are two terms that are often used to refer to the same thing, but redundant and fault tolerant are actually very different – and one certainly doesn’t imply the other. You can, of course, lose the entire SAN, and this can happen on large commercial systems too … about 9 months ago at work, we lost a fault-tolerant, redundant SAN with about 800 VMs’ worth of data on it, and in the end recovered 22 VMs’ worth of data. I got to contribute 4 pages to the RCA on that, and it was ugly, to say the least.

      That’s not to say it happens often, but even the large commercial systems are fancier versions of what you see here. What I put forth in this post was a way to build a cheap SAN for *lab use*; I would not use this in production (not to mention that the EULAs of the software used bind you to non-commercial use). My house is fault tolerant to some degree, with a large UPS and a whole-house generator, but let’s just look at the SAN as presented.

      Although RAID has the word redundant in it, RAID is really fault tolerance. RAID6 has the ability to lose two drives and still rebuild itself, although it could lose more during a RAID rebuild.

      As for Starwind, they have done some renaming with the new version 8. What you are looking for is StarWind Virtual SAN for vSphere Free. Same product, different name, and new abilities such as using SSDs as Level 3 cache.

  • Adam

    Hey Don, great blog… I’m just going through the StarWind manual (love this software) and it says: “It is recommended to provision 256MB–1GB of caching per each terabyte of the HA device’s size. Although, the maximum recommended cache size is 3GB. For most scenarios, bigger cache is not utilized effectively.” You allocated 28GB, am I missing something here?

    • Adam, thanks for the compliments; I haven’t had the time this last year to keep it up like I’ve wanted to, but the material is still quite relevant. As for the cache, the newer versions of Starwind have changed a LOT, including L3 cache with SSDs, the RAM cache and so on. It doesn’t surprise me that the RAM cache settings have changed, and I would go with the newer recommendations.

  • Robson

    Hey man, nice article. I used it as a base to try to cobble together my own SAN for a project I am working on. The thing is, I want to use Fibre Channel instead of iSCSI, but it looks like getting Windows to run an FC target is going to cost an arm and a leg. Do you have any experience doing so on the cheap?

    • Trying to do Fibre Channel isn’t necessarily going to be expensive. The key to staying affordable for home lab or home server stuff is to stay a generation or two behind. That doesn’t mean it’s bad or slow. You can pick up 2Gb/4Gb Fibre Channel hardware fairly cheap on eBay, and that’s still quite a bit more bandwidth than gigabit Ethernet. Just use it with a regular RAID card and regular drives. Here’s some 4Gb Fibre stuff on eBay, dirt cheap, as an example: http://www.ebay.com/itm/Dell-ND407-Emulex-LPE1150-E-4Gb-Fibre-Channel-PCI-E-PCIe-x4-FC-Network-Card-HBA-/191475066081 The fibre cables will cost a bit more, and so on, but you can still get away fairly cheap.

  • Well, the 32TB is raw storage. You’re going to lose some of it to parity in RAID. I run this in RAID6, which means I can lose two drives, so formatted, this comes out to somewhere around 27.5TB of storage. This includes Acronis True Image backups of all computers in the house (although I’ve now moved to Raspberry Pi 2’s running OpenELEC for my HTPCs), my music collection, around 2,500 movies (90% at 720p or better, with about 70% of those at 1080p), home pictures, home videos, and my own freelance video editing work. It really doesn’t take that much to fill it up. Currently, I have around 8TB free.

  • Neil Andrew Cerullo

    Awesome build, and thanks for sharing Don.
    Could you go over the connection you are utilizing to read and write to the SAN? I have one workstation I use for 4K+ video editing, with cinema DNGs – one minute of footage is 20GB. So I’m trying to build a pretty large SAN with great transfer speed. What is the best way to have max performance between one computer and one SAN?

    • If it’s just between a single server and the SAN, then definitely go for a direct cable between the two: CAT5e or CAT6 can both do 10Gb at that length. You would need 10Gb NICs, but that would do it. Modern NICs automatically detect a direct connection, so no need to worry about crossover cables or any of that mess. The quickest way, period, is InfiniBand. You can see speeds up to 40Gb/sec with InfiniBand, and with a direct connection, you don’t have to worry about a lot of the other things you’d normally need, like an InfiniBand switch. You can Google “ESXi 5 InfiniBand” and find a number of articles on it.

      • Neil Andrew Cerullo

        Wow, that Infiniband is crazy fast – and crazy pricey. I think it’s out of my budget for now. 10Gb NICs are looking good. Thanks Don!!

        • Not really. If you look at the current generation, yes, but the 10Gb and 20Gb InfiniBand adapters are dirt cheap. Remember … it doesn’t hurt to step back a generation or two if that’s still quicker than what you’re currently running. I’m using the HP 452372-001 InfiniBand adapter, which you can find here for $49 … if the current generation is what you were talking about, then my apologies. http://www.ebay.com/itm/HP-PCIe-4x-DDR-dual-port-HCA-452372-001-448397-B21-/141580263285

          • Neil Andrew Cerullo

            Brilliant! Clearly I need to do more research here. I’ve built so many computers and RAID arrays, but really haven’t experienced custom SANs and advanced networking :/ It’s like learning all over again, haha.

            Thanks again! I’ll do more digging…

          • Neil Andrew Cerullo

            Hi Don,
            In regards to budget, which would be faster? A SAN or a NAS build? Thank you!

          • These are just differing technologies using the same hardware. A SAN is traditionally a box using Fibre SAN connections, while a NAS is IP-based (and, as a generalization, a NAS shares things at the file level, like a media server that shares mountable folders or network drives, while a SAN shares out LUNs), but the lines are blurring between the two. For example, Starwind’s iSCSI SAN software can run iSCSI over IP-based NICs or Fibre cards. You can pick up 2Gb and 4Gb Fibre cards cheaper than good Intel Pro/1000 Gigabit NICs nowadays, and even with the fibre cables added, it wouldn’t be that much more than just sharing over Ethernet. So it really comes down to technologies: Ethernet is always easier and cheaper, but older 2Gb/4Gb Fibre isn’t that much more, and performs better.

          • Neil Andrew Cerullo

            Great info, thank you.
            I’m just worried about drives and updates. The older InfiniBand is nice and fast, but I have zero programming and coding experience. Command lines are fine, as long as there’s a good wiki for me to troubleshoot with. I’ll check out the Fibre tech – 2GB or 4GB per second would be great throughput for me. I’ve read up a lot on FreeNAS, FreeBSD, ZFS, etc., and I really want to set up a ZFS system with a good amount of drives – probably 36 drives. What’s the best way to configure such a setup? Just in terms of the type of controller for all the drives. I’ve seen cards that say they support 128 drives, but only have 4 ports…

          • They support 128 drives through backplanes, port expanders, or RAID expanders. An example of a RAID expander: http://ebay.to/1EBJbz9 If you had four ports, you could use four of these if you had a chassis with enough drive bays, or you could get a RAID card with an external port, run a cable down to another chassis with one of these in it, and keep expanding. You can even jump from one expander to another, expanding until you hit the limit of the controller. Of course, in the end, you still only have, say, four ports that you’re cramming the data from 128 drives through, and you’ll still top out at the ports’ maximum speed.

          • Neil Andrew Cerullo

            That’s what I figured – thanks for the link and clarification Don. Where would you anticipate bottlenecks in a system like this? http://www.ebay.com/itm/Coraid-SuperMicro-EtherDrive-SRX4200-S2-10GbE-6GB-4U-36-Bay-SAN-Storage-Server-/400883676449?pt=LH_DefaultDomain_0&hash=item5d56877521

          • I don’t usually do system reviews on the blog, but my problem with it is the price for what you’re getting. It has older X5550 CPUs, only 6GB of RAM, NO DRIVES INCLUDED, and 5 different RAID controllers, none of which have 3TB+ drive support. 5 controllers means you’re going to have 5 different RAID volumes, not one (and 5 different things to break). Anyway, for that price, you could build out more than what I have in this article (32TB raw with 32GB of RAM), and that includes drives.

          • Neil Andrew Cerullo

            Right on, super appreciate your insight. For ZFS I know a ton of RAM is ideal. If you had to connect 24 or 36 drives into one ZFS array, how would you do it?

          • Balázs Bódog

            Hey Don, which Fibre cards are you referring to? I have built a SAN almost identical to yours; it works fine, but I’d like to go beyond the 1Gbit Ethernet connection. I purchased a pair of cheap Emulex LPE11002 cards, but can’t get them to work the way you described here. They show up as storage controllers, not as NICs, so StarWind and iSCSI are not an option. I have already tested quite a lot of SAN software that claims to have FC target support, but it’s all either super expensive or it doesn’t support these cards. The only software that actually worked was DataCore’s SANsymphony. So could you give me an example of a cheap FC card that works as a NIC with StarWind?

        • A good article on ESXi5 and Infiniband http://www.vladan.fr/homelab-storage-network-speedup/

  • slayerizer

    Do you have any kind of backup? If the controller ends up dead or you get corruption, you lose everything. Great article, by the way.

    • Personally, yes, I do, although I do not back up everything. And you have this same problem with *any* NAS. This is why I always say RAID is not backup. It’s just storage with some drive redundancy. BTW: you can recover from a dead controller. Just pop in the same brand and model (I keep extras on hand), import the foreign configuration, and you’re off and running.
      For my most important documents, family photos, family videos, business papers, scans of birth certificates, and so forth, I have two Netgear 4-bay ReadyNAS+ units with 4 x 4TB hard drives in each one, using their proprietary RAID. The ReadyNAS+ can be set up to sync from one to another over the Net. I have one here at the house, and one in my data center where I have other stuff colocated. My SAN rsyncs those folders every 24 hours (just the changed files) over to the local ReadyNAS, which then constantly syncs any changes off to the data center (a rough sketch of that nightly sync is below).
      I also have six 18GB Dropbox accounts (the extra size is through referrals, not anything I pay for, so those are all free each month) that I have various stuff synced off to.
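
      For anyone curious, the nightly “changed files only” sync is nothing fancier than a scheduled rsync pass; a minimal sketch of the idea (a Python wrapper, with hypothetical folder paths and hostname) would look like:

        # Minimal sketch of a nightly changed-files-only sync to a local ReadyNAS.
        # Folder paths and hostname are hypothetical.
        import subprocess

        SOURCES = ["/mnt/san/documents/", "/mnt/san/photos/", "/mnt/san/scans/"]
        DEST = "backup@readynas-local:/data/critical/"

        for src in SOURCES:
            # -a preserves permissions/timestamps, -z compresses, --delete mirrors removals
            subprocess.run(["rsync", "-az", "--delete", src, DEST], check=True)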

      • slayerizer

        I agree and understand! I just wanted to know how you’re dealing with it. I’m currently using a very simple setup. Instead of buying a NAS, I built two servers with i3s, and they both have their own JBOD LVM volume. I use rsnapshot (free) to get 7 days of changes replicated to my second server. I used JBOD because my drives are of different sizes. If I lose a drive, I can rebuild my JBOD from the other server. I’m running Plex on top of one of the nodes. I have around 8TB on each side, but 90% of the data is not critical (TV series, movies, …).

        • Most of my data is not critical either, although the amount that I do have is still large. The Netgear ReadyNAS+’s proprietary RAID lets you do RAID with varying drive sizes: it uses the largest drive for parity and then just adds the others together (rough capacity math is in the sketch below), and you can swap drives out for larger ones at any time for more capacity. I picked up my 4-bays for $175 each off eBay (the newest models) and it was well worth it for me. Much cheaper than, say, some i3 boxes, with much better power consumption. I’ve been very impressed with all they can do (I only use a bare fraction of their capability).
          I’ve used SnapRAID, UnRAID, FlexRAID, disParity, and Drive Bender also.
          Currently, I use a 3U 16-bay SuperMicro case with an LSI 9260-16i and 16 x 4TB Hitachi Ultrastars in a RAID6, driven by an i3-4360 (hyper-threaded) and 16GB of RAM. It has a dual 8Gb fibre card that ties into a 48-port gigabit switch with two fibre ports for the SAN, and the house is wired with CAT6A throughout with a minimum of 4 ports per room. There’s also a 24-port 10/100 PoE switch that drives all my security cameras and SIP phones (which connect out to my FreePBX box in the data center), as well as my weather station; all of that now lives on separate VLANs.
          I’ve been busy, but I will have to update my system page soon and my network diagram, as it’s changed quite a bit.
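
          If you’re wondering how much usable space that flexible, mixed-size RAID works out to, a single-redundancy layout is roughly “total minus the largest drive”; a quick sketch of that math (the drive mix here is just an example, not my actual drives):

            # Rough usable-capacity estimate for a single-redundancy, mixed-size array.
            drives_tb = [4, 4, 3, 2]                      # example: four mismatched drives
            usable_tb = sum(drives_tb) - max(drives_tb)   # roughly one drive's worth goes to parity
            print(f"Raw: {sum(drives_tb)} TB, usable: ~{usable_tb} TB")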

          • Finally, I replaced all my XBMC physical boxes with Raspberry Pi 2s. The new quad cores are powerful enough to run OpenELEC without any lag whatsoever and play full 1080p video with 5.1 Dolby, and of course, the power requirement is silly low.

          • slayerizer

            I have one on order, will get a chance to play with it when I get back from Cuba. 🙂

          • Marcus Sattler

            Pi 2s are awesome with OpenELEC. I run Emby (formerly MediaBrowser) on a VM, and then install EmbySync, which syncs all of the Emby server information – content location, watched status, etc. – to the local Kodi DB. The Pi can also do 3D with no issues.

          • Yes, they are. Also, there is an image for the Pi 2s that is a full OS loaded with nothing but VMware utilities, command-line tools, and so on. We install these on Raspberry Pis and send them to the data center and use them as “tool” servers to manage much of the utility work on our clusters and vCenters. You can read more about it here: http://xtravirt.com/product-information/vpi I’ll be doing an article on them later on.

          • Marcus Sattler

            The only thing it cannot do is 4K. Not a huge issue, as I do not own a 4K TV yet, but I am capturing 4K video via my GoPro. Well, I have not tried it yet, but I don’t expect it to work.

          • slayerizer

            I will look into it, but the i3s came from other machines I had. I didn’t buy them solely for server purposes at first. I got another one for my HTPC machine in my cinema room, but I don’t use that machine often either. I normally install other software on my servers (development related), so I try to stay away from NAS-specific products. If I replicate my important data outside my home (I should already be doing this, heh), I could probably live with a single box and get two adapters (1 for a spare). It’s complicated; I’m also trying to figure out how I should set up my lab. I need to have an AD & Exchange lab environment and some Linux boxes.

            Right now I have an old X2 6000, two i3s (2nd gen), one i7 3770K, and a 2008 iMac that I can use for my setup. I can still add more stuff, but I’m trying to figure out the best use…

            I have Apple TV boxes everywhere and they are connected to my Plex server. The HTPC was there to play back Blu-ray ISOs in 7.1, but I don’t do that often… I may consider burning the movies to BR discs and re-using my HTPC machine for server tasks.

            🙂

  • Paul Ansell

    Just tried an ASRock Extreme 3 R2.0 with both buffered and unbuffered ECC modules (8GB, 4GB and 2GB) – it refuses to boot. Flashed to the latest BIOS, still no joy. Thought I’d point this out; it might save someone else the aggro of returning a board, etc.

  • I’m kind of confused and inexperienced, if you could help me understand this. That motherboard has 5 SATA ports, and it looked like the RAID card you had has 4 ports – how do you connect 16 drives to 11 ports? I can’t seem to find the adapter that allows you to hook up all of those drives. I’m sure it’s just some adapter or wire hookup… but it would really help me figure out what I need for my own similar build. Thanks, BTW – this article has been really useful for organizing what all is needed.

    Oh, and I don’t seem to understand: when you go to Starwind, they recommend you tie their software right in with the hypervisor. The hypervisor is the software that gets loaded on the bare hardware – the OS, if you will, of the ESXi host.
    So when I see your build, it makes me think that I’d put something like 2008 R2 on as the OS and of course set up the two RAID configurations for this SAN – the OS on RAID1, and RAID 5, 6, or 1+0 for the rest of the drives – then install the Starwind software on the operating system. Can you point me in the direction of how I would make the ESXi host know that the iSCSI SAN storage is at the SAN’s IP address? Or do I really somehow install the software on the ESXi host?

    Thanks again,

  • Shyju Kanaprath

    Hi Don,
    Is the system still alive? Have you faced any hardware issues after building this?

    • Not a single issue, and yep, it’s still alive and kicking. It serves the media and holds backups for the house. Runs solid as a rock.

      • Shyju Kanaprath

        Thank you very much for the quick reply.

        • You’re welcome, and as an additional point, I used my favorite drive in this: Hitachi Ultrastars, and even though all of them were purchased used, I haven’t had to replace a single drive, and not a one of them reads under 98% on SMART values. Great drives.

  • Marcus Sattler

    Does the Starwind software support FC as well? I have found Emulex 2-port 4Gb/s cards for $16 a piece, and an Emulex 4Gb/s switch for $80. If Starwind will not work for that, is there something else I can use? I really like the thought of the 2-layer caching… Thanks!

    • Sure. The FC is just looked at as another Ethernet port. The OS and driver present it as a network device, and Starwind just uses it as that. There’s nothing special about it except the underlying hardware and cabling. This is why you can use Infiniband with it.

      • Marcus Sattler

        So then I would be looking at about 380MBps for each 4Gb link, right? Can ESXi multipath over two 4Gb FC connections? That would give me about 760MBps to each of two ESXi servers, and a total of 1520MBps. Still below the overall write numbers the SAN could allow for.

        • Multipath works as long as you’ve got two paths. I think you’re being a little generous with the bandwidth, but it won’t be much less. I’ve gotten away from fibre and much prefer Infiniband (the “old” cards you can pick up for next to nothing have dual 20Gb ports, while the new cards have dual 40Gb ports). vMotion is sickeningly quick and SAN throughput is just stupid fast. You can push vMotion, Fault Tolerance, iSCSI, HA, and everything else you can think of through the same connection and it barely breaks a sweat. It does take more setup work, but like I said, Infiniband 20Gb is dirt cheap … adapters are in the $20 range, and an Infiniband switch is in the $200 range. We’re using the dual 40Gb setup in our company clusters and I use 20Gb here at home.
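
          Putting your numbers into a quick calc (the effective per-link rate is an assumption of roughly 380–400 MB/s after encoding overhead, not a measurement):

            # Rough multipath math for dual 4Gb FC links to two ESXi hosts.
            link_effective_mb = 380   # assumed usable MB/s per 4Gb FC link
            links_per_host = 2
            hosts = 2

            per_host = link_effective_mb * links_per_host   # ~760 MB/s per host
            total = per_host * hosts                        # ~1520 MB/s across both hosts
            print(f"Per host: ~{per_host} MB/s, total: ~{total} MB/s")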

  • Ralf Gebhard

    I’m currently running a 3-node Starwind cluster (native SAN); each node exposes its own shared LUNs.

    Setup of each node:
    Supermicro X9DRW-3LN4F+ (4x GbE, 128GB RAM, 2x E5-2640)
    1x LSI 9265-8i
    4x SAS 1TB 7K ST91000640SS as RAID-0 = 4TB target
    2x SATA OCZ Vertex 4 250GB as RAID-0 = 500GB target
    1x LSI 9271-8i
    4x SAS 300GB 15K ST96300653SS as RAID-0 = 1TB target
    2x Emulex OCe11102 2-port 10GbE NICs for iSCSI and synchronizing the nodes.
    Each node connects to each other node for iSCSI multipath and Starwind synchronization; the iSCSI LUNs are divided for high I/O load (SQL), OS (15K drives), and data (7K drives). Overall it performs fine, with no outages in the 2 years it has been running up to 15 Hyper-V VMs. I forgot to mention, the OS is W2K12R2.
    Redundancy for the iSCSI targets is done by Starwind, so theoretically two nodes could fail without any problems (and I’ve tested it once).
    But now it comes to expansion 🙁

    For that, I’m thinking about changing that “self-hosted” SAN to two dedicated storage chassis (running Starwind) and maybe replacing the 10GbE Emulex cards with Infiniband controllers (after reading this thread).

    Any suggestions on which RAID controller would be best? The new setup will have more SSDs (enterprise SSDs) used as L2 cache for the slower 7K spindles; since only 2 storage servers will be used, data should be on RAID10 volumes.

    • My assumption here is that you want to add drives? Or are you looking for a totally new RAID controller? My suggestion would be to go with a 9260-8i (these are great controllers and eBay is flooded with them) with a RAID expander (you can pick them up off eBay for $150 on good days, and I use them in almost EVERY build I do). This will take you from two RAID ports (8 drives) to four ports (16 drives). You could even daisy chain another RAID expander in there to go further. Of course, you’ll eventually reach the theoretical maximum of your PCI-e lanes, but really, for storage, a 9260-8i will give you plenty of throughput. Barring that, you can get a 9260-16i, but be ready to pay. A sample RAID expander is here: http://ebay.to/1In83cT
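
      To put a rough number on that PCI-e ceiling versus the spindles (all figures here are illustrative assumptions):

        # Rough PCIe slot ceiling vs. spindle throughput for a 9260-8i plus expander.
        pcie_lanes = 8
        pcie2_mb_per_lane = 500    # PCIe 2.0, ~500 MB/s per lane theoretical
        slot_ceiling = pcie_lanes * pcie2_mb_per_lane   # ~4000 MB/s for an x8 slot

        drives = 16
        drive_seq_mb = 150         # assume ~150 MB/s sequential per 7.2K spindle
        spindle_total = drives * drive_seq_mb           # ~2400 MB/s

        print(f"x8 slot ceiling: ~{slot_ceiling} MB/s, 16 spindles: ~{spindle_total} MB/s")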

      • Ralf Gebhard

        Hi Don,
        yes, adding drives and moving to a “simpler”, more performant setup. I’m also looking into SOFS technology (SMB Direct/RDMA) with 2 redundant scale-out datastores.

        • Then I think the expander is a perfect choice, and it’s also economical. I’m running these with 16-bay SuperMicro chassis and CacheCade 2.0/FastPath physical keys on the 9260-8is, with 14 x 2TB Hitachi Ultrastars in a RAID6 and 2 x 512GB Crucial MX100s in a RAID1 as the CacheCade volume, and getting awesome results. That’s for local storage. For Starwind servers, I use very similar setups, but don’t add the CacheCade physical key, and use the SSDs as L2 cache. Several of these have been in service 18 months and I’ve not had to replace a single SSD as of yet, and they all carry heavy, heavy workloads.

          • Ralf Gebhard

            Thanks a lot for sharing that. I’ll try to test the different setups against each other, e.g. the new MS SMB Direct (RDMA) with LSI-attached storage/Infiniband vs. Starwind’s (non-RDMA) iSCSI solution; curious about the results.

  • Marcus Sattler

    Can I get your thoughts/review on the following? Also, that RAID card is still the best bang for the buck, right? I was looking for other cards that could utilize larger drives but am not finding any.

    Hitachi 2TB drives 5 * $70 = $350
    Rosewill RSV-L4500 4U 15-Bay Server Chassis $98.00
    AMD FX 8320 Black Edition 3.5GHz AM3+ Boxed Processor (8 core) $119.99
    ASRock 970M Pro3 AM3 ATX AMD Motherboard (64GB RAM) $64.99 ($49.99)
    LSI MegaRAID 84016E $80
    Corsair CX Series 750 Watt ATX/EPS Modular $82
    2 x Crucial Ballistix Sport 16GB Kit (8GBx2) DDR3 1600 MT/s (PC3-12800) $184.00
    Crucial BX100 120GB SATA 2.5-inch SSD $65.00

    $1043.98

    Infiniband

    Voltaire ISR-9024 Grid Switch, Infiniband 24x 4X SDR ports, 10Gb $100
    3 x Supermicro AOC-STG-i2 Dual-Port 10Gb PCIe x8 Network Controller Card ($55 each) $165

    $265

    $1308.98 for 6TB of highly accessible, RAID6-protected storage

    Thanks!

    • Marcus Sattler

      Looking at another post, perhaps I should go with the 9260-8i, which can handle 8 drives as well as larger drives. Right?

      Thanks!

      • The above build looks good. The 84016E is still the best bang for the buck, but for not much more, you can grab a 9260-8i. It handles 6Gbps drives, and larger drives, plus, as I stated in my reply to Ralf below, you can pick up a RAID expander for $150 on eBay to expand it out to 16 drives, or even further if you want to keep daisy chaining. Plus, with the 9260-8i, you can always pick up a CacheCade/Fastpath hardware key out of China pretty cheap, and have that as part of your RAID setup. The link that I provided Ralf below is still good for the RAID expander http://ebay.to/1In83cT

        • Marcus Sattler

          Got it up and running. Out of curiosity, I set the block size on the RAID controller to 1024. What block size should I use in Windows – just 64KB, right?

          What are you setting your cache to for PrimoCache?

          Thanks!

          • Block size gets a lot more discussion than it’s worth, especially for virtualization. With virtualization, your workload is VERY mixed, unlike a NAS that might store movies, for example, where every single file is big, or music, where every single file is small. THEN setting your block size becomes a lot more important. 64KB is just fine for any varied workload. As for PrimoCache, I just set it as large as I can while keeping my total memory load at 80% or under.
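
            As a rough example of that 80% rule (the RAM size and baseline usage here are hypothetical):

              # Rough RAM cache sizing while keeping total memory load at or under 80%.
              total_ram_gb = 32
              baseline_use_gb = 10      # OS, Starwind, and everything else already running
              target_load = 0.80

              cache_gb = max(0, total_ram_gb * target_load - baseline_use_gb)
              print(f"RAM cache size: ~{cache_gb:.0f} GB")   # ~16 GB in this example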

          • Marcus Sattler

            Ya know, I bought an open rack as well to put this in, and the SAN server is the first rack-mounted box I’m throwing in there. Below it, at the bottom of the rack, is enough room for a couple of external USB drives. I have 6TB of RAID 6 storage, and ended up with 2 x 2TB LUNs and 1 x 1.45TB LUN. I’m thinking about throwing a couple of 5TB external drives under there, connecting them via USB, and just backing up to them. The LUNs being files within Windows does leave a lot of flexibility for backing them up.

          • Great idea, and similar to what I do. I consistently tell people: RAID is redundancy, but is NOT a backup. Always make your own backups. And considering that 5TB (and now 6TB) hard drives are available and fairly cheap compared to the cost of replacing your data (IF it can be replaced), there’s no reason you can’t have 12TB to 30TB of data just sitting in external HDs underneath your rack or sitting on a 1U rack shelf.

          • Marcus Sattler

            So… I have this VM which I sized incorrectly a while back. I’m using the vSphere Converter on the VM, converting it to a new VM within my ESXi cluster. Currently the transfer rate is 1.06 GB/s. Gotta love it.

            Waiting to get all my VMs migrated over. I have a feeling iSCSI over the 4 x 1Gb Ethernet links I have will be fine, but for the best numbers, I now see where Infiniband could be pretty sweet.

            My media/movies are stored on a separate unRAID server with 30TB of storage, so I just don’t see myself needing more than the 4Gb of throughput.

            Will see how it goes!

            BTW, here are my numbers. Kinda weird how drastically different they are from yours. Did you use the SSD cache in PrimoCache as well? Because I did.

            Using 5/100MB:

            Before cache:
            Seq Q32T1: 1520 / 1481
            4K Q32T1: 260.1 / 273.9
            Seq: 1106 / 964.3
            4K: 82.17 / 70.41

            After cache:
            Seq Q32T1: 1562 / 1856
            4K Q32T1: 510.4 / 496.5
            Seq: 3927 / 5335
            4K: 379.1 / 365.8

  • Marcus Sattler

    Since this article has gotten so much attention, you may want to note that PrimoCache is no longer free. The price of a license could sway some back to Windows 7/8/10, as those licenses are not as expensive as the server version at $120.

  • Shaibu Ali

    Great article! I’ve been looking for a solution like this for a while.
    Just a few very basic questions: 1. What OS did you install on the box?
    2. Do you run VMs off this box?
    3. When you virtualized your home PCs, what do you use to connect to the VMs remotely? A thin client?

  • Jon David

    What are the group’s thoughts on building something like this in a large corporate environment? Currently we are running an EMC VNX with about 26TB. Large files that need to be accessed quickly, but I could build 100 of these for what I am paying in hardware and maintenance. I need to dedupe and replicate as well. Thoughts?

    • No problem with this in a production environment; I’m running 4 of these in two different data centers just as they are built here (even using the consumer grade motherboards and non-ECC RAM), and they are on their 3rd year of heavy usage without a single failure or fault (naturally a couple of drive replacements, but that’s it).
      Two run as backup SANs for Veeam Backup, one has 4 x 10Gb NICs in it and runs as an iSCSI SAN for ESXi nodes, and the other has 40Gb Infiniband cards and again serves as an iSCSI SAN.
      You might upgrade this setup to a 2011-v3 CPU and max it out at 64GB of RAM (some of the new consumer motherboards are accepting 16GB RAM sticks for 128GB of RAM), or upgrade the mobo to a server board running something like an E3-1270. Starwind’s virtual SAN includes deduplication and so on https://www.starwindsoftware.com/starwind-virtual-san
      Finally, remember that the PrimoCache setup was simply a proof of concept for this build. Starwind’s SAN has its own RAM caching, as well as SSD caching and tiered storage, and all the goodies that come with it. Starwind would suffice on its own.

  • Boaty Mcboatface

    Once I have the cash I will build this thing

  • Boaty Mcboatface

    I would love to build one of these things, thank you for the tutorial

  • David Česal

    Today I would use ZFS with an Intel NVMe SSD as a cache. The biggest problem is making the SAN highly available – how to synchronize data between two drive arrays in different SANs.

    • I wouldn’t, but I’ll stick with my better benchmarks, and it’s all personal preference. High availability isn’t an issue; there are a number of ways to approach that easily, either open source or otherwise. For example, Starwind, as I mention in several places on my blog, does 2-, 3-, and more-node high availability.

      • David Česal

        It depends, and benchmarks are needed for all setups. Anyway, building a cheap SAN is easy; building an HA SAN is neither easy nor cheap. I can use Ceph, but it’s slow (with two nodes). I can use DRBD, but I lose snapshots and easy zfs send/receive… Not an easy task.
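
        (The zfs send/receive piece itself is simple enough – roughly something like the sketch below, with hypothetical dataset and host names – it’s the automatic failover and split-brain handling that’s the hard part.)

          # Minimal sketch of incremental ZFS replication to a standby box.
          # Dataset, snapshot, and host names are hypothetical; this is replication, not HA.
          import datetime
          import subprocess

          dataset = "tank/vmstore"
          remote = "root@standby-san"
          prev_snap = f"{dataset}@repl-prev"                          # already on the standby
          new_snap = f"{dataset}@repl-{datetime.date.today():%Y%m%d}"

          subprocess.run(["zfs", "snapshot", new_snap], check=True)

          # zfs send -i <old> <new> | ssh <remote> zfs receive -F <dataset>
          send = subprocess.Popen(["zfs", "send", "-i", prev_snap, new_snap],
                                  stdout=subprocess.PIPE)
          subprocess.run(["ssh", remote, "zfs", "receive", "-F", dataset],
                         stdin=send.stdout, check=True)
          send.stdout.close()
          send.wait()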