When I began this project the original goal was "reasonable performance for $500 per machine". That is turning out to be a bit of a challenge, especially since I decided not to cut corners on the rackmount chassis. Nothing justifies the cost of a clean, well-made chassis like working inside a cheap case for an hour and emerging with half a dozen cuts on your hands from rough edges. Further challenging the $500 bottom line was the desire to run either a dual-core or dual-CPU configuration.
Form Factor, Topology, etc.
Socket 939 is fading away, and my research showed that prices for 939 gear were dropping along with it. So as a money-saving measure, I decided to actively seek out Socket 939 hardware for this project. I also decided to focus on a good quality motherboard without necessarily using a server motherboard... this may turn out to be a poor decision - we'll see once things are up and running. After reviewing the data sheets and specs on a number of motherboards I decided to use an ATX form factor.
Performance and Cost Considerations
I want good performance without breaking the bank. While a sweet dual-CPU/dual-core system with tons of memory and a massive SCSI array would make me smile, it would put the project way beyond budget. So here are the tradeoffs I made:
- Running a single dual-core CPU instead of dual CPUs with dual cores. This means each box will only be 2-way instead of 4-way; then again, with 4 nodes running a clustering tool like openMosix I will have an 8-way cluster, which is still pretty cool (see the sketch after this list).
- Have you noticed how pricey memory is lately? I'll start out with 1G per node, but make sure that my motherboard can support at least 4G for future expansion. Note to self: hoard memory later when it's cheap and make a killing on eBay when prices go up again.
- SAS or SCSI gives killer I/O performance but at a price. I'll build these machines with SATA-II devices in the 250-320G range, perhaps spending a little more for a larger on-device cache.
- My original plan was to build a blade server system, using DIY parts from a vendor like ATXBlade. But in analyzing the cost - $550 for the blade storage unit, $325 for each blade chassis - I decided that I didn't really need to build a dense server farm. After all, I have a full height rack and will probably not build out enough systems to exceed the capacity of the rack.
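About that openMosix point: here is a minimal sketch of what the cluster membership could look like. As I understand it, openMosix reads its node map from /etc/openmosix.map, one line per entry giving a node ID, an address, and how many nodes sit at consecutive addresses. The addresses below are hypothetical placeholders for the four boxes, so treat this as a sketch of the idea rather than a working config.

    # /etc/openmosix.map -- node-ID  address  count-of-consecutive-nodes
    1   192.168.10.101   1
    2   192.168.10.102   1
    3   192.168.10.103   1
    4   192.168.10.104   1

With a map like that on each node (and the openMosix kernel running), processes started on any one box can migrate across all eight cores. The last column lets you collapse consecutive addresses into a single line - e.g. "1 192.168.10.101 4" - but spelling each node out keeps it obvious which box is which.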
After a significant amount of consideration, here is the parts manifest for each node:
- Motherboard: ASUS A8N5X Socket 939 NVIDIA nForce4 ATX AMD motherboard - retail packaging
- CPU: AMD Athlon 64 X2 4400+ (Toledo core) 2.2GHz processor - OEM packaging
- Memory: Kingston ValueRAM 1G 184-pin (PC3200) DDR400 memory - retail packaging
- Disk: Western Digital Caviar SE16 320G 7200RPM SATA 3.0Gb/s hard drive - OEM packaging
- CPU cooler: Thermaltake CL-P0257 "Blue Orb II" CPU cooler for K8 - retail packaging
- Case: iStarUSA Storm Series D-200 Black 2U rackmount case - retail packaging
- Rails: iStarUSA TC-RSL-20 sliding rail kit for rackmount chassis - retail packaging
Motherboard selection was driven by the following criteria:
- Socket 939, ATX form factor
- Able to support AMD Athlon X2
- At least 4GB memory
- Support for at least 4 SATA-II devices
- On-board RAID support for RAID-0, RAID-1, RAID 0+1, and JBOD
- On-board video support
- Front side bus speed of at least 1000MHz
- On-board gigabit network support
The up-to-speed reader will note that the motherboard I chose does not come with on-board video support. I noticed that too - AFTER I had ordered the motherboards. There is a whole story behind this that I'll write down later. There is also a question about SATA performance - some spec sheets state the motherboard is SATA-I (1.5Gb/sec) while others state it's SATA-II (3.0Gb/sec). I think the board was rev'd at some point and this may have been part of the rev. At any rate, if it turns out to be SATA-I then I can still do some benchmarking and perhaps install a SATA-II card later.
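By "some benchmarking" I mean something along these lines - a minimal sequential-read test against the raw device, timed from Python. The device path is a hypothetical placeholder and the numbers are only ballpark (it doesn't bypass the page cache), but it should be enough to see whether the link speed is anywhere close to being the bottleneck:

    #!/usr/bin/env python
    # Rough sequential-read benchmark: time reads from the raw device and report MB/s.
    # Needs root to read the device node; DEVICE is a placeholder for the drive under test.
    import time

    DEVICE = "/dev/sda"            # point this at the drive you want to test
    CHUNK = 4 * 1024 * 1024        # read in 4M pieces
    TOTAL = 512 * 1024 * 1024      # read 512M in all

    def sequential_read_mb_per_sec(path, total=TOTAL, chunk=CHUNK):
        done = 0
        start = time.time()
        dev = open(path, "rb", 0)  # unbuffered, so we time the actual reads
        try:
            while done < total:
                data = dev.read(chunk)
                if not data:       # hit the end of the device
                    break
                done += len(data)
        finally:
            dev.close()
        elapsed = time.time() - start
        return (done / (1024.0 * 1024.0)) / elapsed

    if __name__ == "__main__":
        print("%s: %.1f MB/s sequential read" % (DEVICE, sequential_read_mb_per_sec(DEVICE)))

For what it's worth, a single 7200RPM drive will probably sustain well under SATA-I's 150MB/s ceiling anyway, so the SATA-I vs SATA-II question matters most for transfers out of the on-drive cache and for any striped arrays down the road.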