"kony" said in news:
[email protected]:
This solution will be slow, costly, relatively large, and IF you got
it working then all you'd have is a limited-write-cycle shared drive.
If THAT is all you wanted, you're still better off to just use
low-height PCI network cards. They'd be as fast, less expensive, and
meant for exactly this use so the system(s) are up and working right
away, not so many months later that it'd be more time and cost
effective to just buy a new motherboard and CPU.
Yeah, and with everything onboard (video, network controllers, IDE or
SATA, sound, and USB) you could stack the mobos very close together and
use the onboard network controllers with a switch to connect them all.
Some onboard network controllers run at gigabit (1000 Mb/s), which gets
you roughly 75% of the PCI bus's bandwidth. You'd have the solution NOW,
plus you'd be using standard protocols to communicate rather than having
to invent your own proprietary ones.
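To sketch what "standard protocols" buys you: any two of the stacked
boards can talk over plain TCP sockets with no custom wiring or drivers.
The port number and message below are made up for illustration, and the
demo runs over loopback on one machine; on the real setup you'd point the
client at the other board's LAN address on the switch.

```python
import socket
import threading
import time

PORT = 50007  # arbitrary port chosen for this example


def serve_once(host="127.0.0.1"):
    # One of the stacked boards listens for a single connection,
    # echoes back whatever it receives with an "ack: " prefix.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"ack: " + data)


# Run the "server" board in a background thread for this demo.
t = threading.Thread(target=serve_once, daemon=True)
t.start()
time.sleep(0.2)  # crude wait for the listener to come up

# Another board on the same switch connects and sends a message.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))
    cli.sendall(b"hello from board 2")
    reply = cli.recv(1024)

print(reply.decode())
```

That's the whole point of going with Ethernet: the transport is a solved
problem, and you only write the application layer on top of it.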
In fact, it seems there are already solutions like this, where you get
rack-mounted computers and switches. Get a rack-mounted case like
http://www.gtweb.net/j1100.html (there are smaller units without the
3.5" and 5.25" external drive bays if you don't need a floppy or CD/DVD
drive) to house your motherboard with its onboard video and gigabit
LAN, add a rack-mounted switch, probably a UPS large enough to handle
all the power (rack-mounted optional), and the rack itself, of course,
and you're off with a custom-built multi-system computing center.
However, rack gear is always expensive.