4 GB RAM-based NAS

  • Thread starter: Shailesh Humbad
RAID 5 is not very fast at writing, and slower at reading. It's used for
reliability and the ability to make a file system that spans several disks.

RAID 5 is faster when reading, both for small and large files.
RAID 1 (mirroring) is faster at reading, and no change on writing (I think)

A little bit slower on writing, but nothing to write home about.
RAID 0 (striping) is faster at reading and writing

Actually, I was thinking about RAID 0+1:
faster reading and writing, but still with redundancy.

RAID 0+1 controllers are cheap. It costs more in hard disks, but at
$700, that NAS solution isn't cheap either.
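
For reference, a back-of-the-envelope sketch in Python of the idealized
read/write multipliers being tossed around here; the factors are naive
assumptions for illustration, not measurements, and real controllers and
workloads will deviate:

```python
# Idealized throughput multipliers for n identical disks, each doing
# `base` MB/s. Crude assumptions for illustration only; real RAID 5
# small-write penalties are worse (read-modify-write costs extra I/Os).
def raid_throughput(level, n, base):
    """Return (read MB/s, write MB/s) under naive assumptions."""
    if level == "0":      # striping: all spindles in parallel
        return n * base, n * base
    if level == "1":      # mirroring (n == 2): read from either disk
        return 2 * base, base
    if level == "5":      # one disk's worth of capacity goes to parity;
        return (n - 1) * base, (n - 1) * base / 2  # writes pay for parity
    if level == "0+1":    # striped mirrors: every block written twice
        return n * base, (n // 2) * base
    raise ValueError(level)

for level, n in (("0", 4), ("1", 2), ("5", 4), ("0+1", 4)):
    print(f"RAID {level}: read/write = {raid_throughput(level, n, 50.0)}")
```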

Marc
 
Cenatek:

http://www.cenatek.com/rocketdrive_specs.cfm

http://froogle.google.com/froogle?q=cenatek&btnG=Search+Froogle

1GB Cenatek Rocket Drive DL - $499.00
2GB Cenatek Rocket Drive DL - $659.00
4GB Cenatek Rocket Drive DL - $819.00
(Board only prices)

Supported SDRAM:
http://store.yahoo.com/cdrdvdrmedia/sdramspec.html

SuperSSD:

http://www.superssd.com/ramsan_systems_specs.htm

http://froogle.google.com/froogle?q=ramsan&btnG=Search+Froogle

The Cenatek card costs $819 plus at least $660 for 4 GB of RAM, going
by the lowest price for 1 GB SDRAM on Pricewatch, so at least $1,479.
The 4 GB NAS I quoted comes out to about $700 *including RAM*, because
it uses 1 GB DDR SDRAM DIMMs, which currently have one of the lowest
costs per megabyte.

The SuperSSD drives have the much faster 64-bit interfaces, but the
prices are astronomical for end-users.

You know, with some additional software it would be possible to
combine multiple 4 GB NAS boxes into a software RAID to increase the
capacity.
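
A minimal sketch of that idea, assuming the boxes are already visible
as ordinary mount points (the /mnt/ramnas* paths and the 64 KB stripe
unit are hypothetical); a real deployment would more likely layer the
OS's own software RAID over network block devices:

```python
import os

# Hypothetical mount points for two RAM-based NAS boxes; the paths and
# the stripe unit are assumptions for illustration.
MOUNTS = ["/mnt/ramnas0", "/mnt/ramnas1"]
CHUNK = 64 * 1024

def striped_write(name, data):
    """Write fixed-size chunks round-robin across the mounts (RAID 0 style)."""
    files = [open(os.path.join(m, name), "wb") for m in MOUNTS]
    try:
        for i in range(0, len(data), CHUNK):
            files[(i // CHUNK) % len(files)].write(data[i:i + CHUNK])
    finally:
        for f in files:
            f.close()

def striped_read(name):
    """Reassemble a striped file by reading one chunk from each mount in turn."""
    files = [open(os.path.join(m, name), "rb") for m in MOUNTS]
    chunks, i = [], 0
    try:
        while True:
            chunk = files[i % len(files)].read(CHUNK)
            if not chunk:
                break
            chunks.append(chunk)
            i += 1
    finally:
        for f in files:
            f.close()
    return b"".join(chunks)
```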

Shailesh
 
Shailesh Humbad wrote:
[...]

You could stripe a pair of them for speed :-) (actually
the PCI bus would be the bottleneck unless you had a system
with two PCI buses.)
 
Previously Shailesh Humbad said:
The flash-based disks have high-latency (microsecond range), limited
write cycles (about 1 Million), and are very expensive. On the other
hand, they have higher capacity, ruggedness, and are non-volatile.

Gigabit Ethernet has latencies in the usec range anyway. The issue
with write cycles depends on the technology: if it is flash-based, you
are correct. However, there are also RAM+HDD+UPS products:

http://www.superssd.com/products/ramsan-320/index.htm

This is basically your idea + UPS + software that stores the data on
disk in case the power fails. Of course it is significantly more
expensive than your approach. On the other hand, it likely looks
better... ;-)
A 4-way RAID-0 would be a bit more expensive due to the extra cost of
the controller, it would have the same latency issue for random
operations (microsecond range), and take up more space and power. On
the other hand, it would have huge capacity, unlimited write cycles,
and be non-volatile.

More like millisecond latencies for reads. RAID 0 is good for
throughput; latency is not reduced much, if at all.
I guess my idea of this 4 GB RAM-based NAS was to get some
high-performance, plug-and-play storage for the lowest possible cost.
All mainstream systems are going to be limited by the PCI 32-bit
33 MHz bandwidth of 133 MB/sec (about 1.06 Gbit/sec), and systems with
higher-bandwidth interfaces, like Opteron and Xeon, cost a lot more.
Photoshop, Paint Shop Pro, Pinnacle Movie Studio, and similar
photo/movie programs can use a fast temporary drive. But with a couple
gigs in the system already, I don't know if an additional drive of up
to 4 GB is worth $600.
For what it's worth, I set up an old PC I have with FreeBSD and
shared an 800 MB RAM disk from it. It was neat, but since I only have
100 Mbit Ethernet, I could only get about 9 MB/sec sustained.
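
For reference, the arithmetic behind the bandwidth figures in this
thread (theoretical peaks; protocol and bus overhead eat into them,
which is why sustained numbers come in lower):

```python
# Theoretical peak bandwidths mentioned in this thread.
pci = 33_330_000 * 4              # 32-bit PCI at 33.33 MHz, 4 bytes/transfer
print(pci / 1e6, "MB/s")          # -> ~133 MB/s
print(pci * 8 / 1e9, "Gbit/s")    # -> ~1.07 Gbit/s

gige = 1_000_000_000 / 8          # Gigabit Ethernet payload ceiling
print(gige / 1e6, "MB/s")         # -> 125.0: a single GigE link roughly
                                  #    saturates a shared 32-bit PCI bus

fe = 100_000_000 / 8              # 100 Mbit Ethernet
print(fe / 1e6, "MB/s")           # -> 12.5 raw; protocol overhead is why
                                  #    ~9 MB/sec sustained is typical
```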

Don't get me wrong: the idea is not stupid. It just is not efficient
for most real-world applications because of the details. There is
nothing wrong with thinking about it.

Also, for some algorithms you need very large, very fast
random-access tables with small entries. Your approach might be the
cheapest solution to, e.g., get 64 GB for such a table. Maybe also do
some of the calculations directly on the "NAS". From there it is a
small step to a cluster with large memory and moderate CPU power.

Arno
 
Previously Al Dykes said:
[...]

You could stripe a pair of them for speed :-) (actually
the PCI bus would be the bottleneck unless you had a system
with two PCI buses.)

Maybe a mainboard with PCI-X and several PCI-X-to-PCI bridges?

Arno
 
Marc said:
Marc de Vries wrote:

Uh, the limit is that the x86 architecture can directly address 4 gig.

[nitpicking mode on]
Actually, the limit is 32-bit addressing in the CPU.
[nitpicking mode off]

2^32 = 4,294,967,296, which is close enough to "4 gig" for the purposes of
this discussion.
True. But still a lot better than a RAM drive connected at 125 MB/s.

The best option is of course a 64-bit CPU with a 64-bit OS,
and hope that the application doesn't have 32-bit limits built in.

But I don't think the OP had an Opteron with a 64-bit Unix version in
mind when he wrote about his idea.

ISTR he was thinking about a Mac G5 at some point, which is running a 64-bit
processor already with a 64-bit BSD version and is socketed (in some models
anyway) for 8 gig. And of course Photoshop runs fine on Macs.
*grin*

That brings me back to my last remark.
What are you planning to do in GIMP that benefits from 8 GB of RAM? :-)

True. Maybe working with the output from an 8x10 scanning back?
 
Marc said:
[...]
RAID 0+1 controllers are cheap. It costs more in hard disks, but at
$700, that NAS solution isn't cheap either.

RAID 0+1 controllers that outperform the striping/mirroring
capabilities built into the Windows server versions are not cheap; the
cheap ones just use soft mirroring with a boot ROM that can bring the
system up far enough for the drivers to load.
 
I don't see how what you've said is different from what I said.

The poster I was replying to apparently assumed that the total
physical memory (or even virtual) available to all applications is
limited to 2 GB, while the rest of the memory will only be used by the
kernel.

I corrected him: every single process can get its own private 2 GB of
virtual space (the same as you've said), because the other 2 GB are
reserved for OS use.

The AllocateUserPhysicalPages Win32 API is available starting with
Win2K Pro, even though Win2K Pro only supports up to 4 GB of physical
memory. Note that this API requires the "Lock pages in memory"
privilege, which by default is given only to the Administrators group.
So it's not of much use to user applications, unless you want to give
this privilege to users.

Windows 2000 Advanced Server is limited to 8 GB, and Windows 2000
Datacenter Server is limited to 32 GB.

See the following for more info:

http://support.microsoft.com/default.aspx?scid=KB;en-us;283037
 
I don't see how what you've said is different from what I said.

The poster I was replying to apparently assumed that the total
physical memory (or even virtual) available to all applications is
limited to 2 GB, while the rest of the memory will only be used by the
kernel.

Even if your PC has only 256 MB of memory, each and every process
(application) will be able to address 0-2**31 bytes (2 GB) if it needs
to. Every IA-32 chip has the ability to manage and map virtual memory
pages and work with the OS kernel to keep pages that have not been
used recently in a swap file. As the needed virtual pages expand and
exceed physical memory, your overall machine slows down and the disk
that the swap space is on gets very busy.

Google "NT swap page" to find lots of explanations.
 
Latency of Gigabit Ethernet:
http://www.accs.com/p_and_p/GigaBit/results_lmbench2.html

The slowest Gigabit Ethernet card has a latency of 400 microseconds,
or 0.4 milliseconds. So the NAS would have that latency or better.

My 4-disk 10k rpm RAID 5 has a latency of roughly 4 milliseconds, or
4000 microseconds, which is ten times worse.

Of course, the latency of system RAM is in the tens of nanoseconds
range, or thousands of times faster, but system RAM is limited by
motherboards and OSes. The RAM in the NAS will be configured as a disk
drive, so there is no practical size limitation (assuming NASes can be
combined), and any application that needs fast disk space can benefit.

Also, from that link, application-level performance never reaches
125 MB/sec, only 50-90% of it on the various benchmarks, but that is
still quite fast for disk space.

Maybe I will do an experiment and post the results here.
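
In that spirit, a crude way to measure the network half of the latency
(a sketch; HOST and PORT are placeholders, and the machine at HOST
must run a service that echoes each byte back):

```python
import socket
import time

HOST, PORT = "192.168.0.2", 9000   # placeholders for the NAS-side machine
N = 1000

s = socket.create_connection((HOST, PORT))
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send tiny packets immediately

t0 = time.perf_counter()
for _ in range(N):
    s.sendall(b"x")   # one byte out...
    s.recv(1)         # ...wait for the one-byte echo
dt = time.perf_counter() - t0
s.close()
print("average round trip: %.0f usec" % (dt / N * 1e6))
```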
 
I suggested such a card could be built very cheaply, for systems
whose memory is maxed out.
 
Al said:
Even if your PC has only 256 MB of memory, each and every process
(application) will be able to address 0-2**31 bytes (2 GB) if it needs
to.

However if the process needs 2 gig and you have only 256 meg in the machine
you'll likely die of old age between one keystroke and the next.
 
Yes, it seems like it would only need a memory controller and some
kind of PCI bridge. I don't understand why it's so hard to find.
Maybe some small hardware specialty shop carries it.
 
I have no idea why the cards are so expensive. It's just one chip and
1/2/4 DIMM sockets.

What is the design and production cost of a custom PCI-DRAM ASIC?
 
However if the process needs 2 gig and you have only 256 meg in the machine
you'll likely die of old age between one keystroke and the next.

Wrong, more or less. "Needs" can be defined as the number of pages
addressed, but at any instant a typical program is actually using only
a few pages, generally in a small loop. This is called the program's
"working set", and it changes over the life of the program's
execution. It's true that some programs have huge working sets, some
because of the nature of the problem, others because they are written
by amateurs. As long as the total of the working sets of all the
running processes is less than the physical RAM, performance will be
OK, even if the sum of all the address spaces vastly exceeds the
physical RAM. Google "working set" for lots of info. Many, many PhD
theses have been done on how programs behave in a paging environment,
since the Atlas in the '60s.

This covers the topic nicely:
http://en.wikipedia.org/wiki/Virtual_memory
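
A toy simulation of that point (a sketch, not from the thread): a
program that loops over four pages runs fine with four page frames and
thrashes with three, even though its address space is the same size in
both cases.

```python
from collections import OrderedDict

def page_faults(refs, frames):
    """Count page faults for a reference string under LRU replacement."""
    resident = OrderedDict()
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # mark as recently used
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict the least recently used page
            resident[page] = True
    return faults

# A program looping over 4 pages has a working set of 4, regardless of
# how big its address space is:
refs = [0, 1, 2, 3] * 100
print(page_faults(refs, 4))  # -> 4   (cold misses only)
print(page_faults(refs, 3))  # -> 400 (one frame short: every access faults)
```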
 
Previously Shailesh Humbad said:
The slowest Gigabit Ethernet card has a latency of 400 microseconds,
or 0.4 milliseconds. So the NAS would have that latency or better.

Only if you do not have a switch in between. Also, I see up to
500 usec for UDP and up to 900 usec for TCP. In addition, I don't know
whether you get machines in the same class for the 700 USD quoted in
the OP.
My 4-disk 10k rpm RAID 5 has a latency of roughly 4 milliseconds, or
4000 microseconds, which is ten times worse.

Yes. But latency is just one of the factors that influences
performance. It is wrong to assume that a latency of 1/10 will
increase performance by a factor of 10. In addition, you need to
measure only in the first 4 GB of the RAID for a fair comparison.

Arno
 
Only if you do not have a switch in between. Also, I see up to
500 usec for UDP and up to 900 usec for TCP. In addition, I don't know
whether you get machines in the same class for the 700 USD quoted in
the OP.
Yes. But latency is just one of the factors that influences
performance. It is wrong to assume that a latency of 1/10 will
increase performance by a factor of 10. In addition, you need to
measure only in the first 4 GB of the RAID for a fair comparison.
True. I had not made that assumption. Even if it is 1 ms for Gigabit
Ethernet--which is a fair estimate--it is still much better than the
average seek time of any hard disk. If limited to the inner cylinders
of the disk, maybe the latency will be comparable. I think the network
latency will be much better than 1 ms in a typical LAN, though.
Also, if you put a hard disk RAID on a network, you will still have
the same network delay.

I just bought two SMC Gigabit Ethernet cards for $9 each. When they
come in, I will run a disk benchmark with a RAM drive on a second
computer, mounted via Samba to a WinXP machine; then we can see some
real numbers.
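
Something along these lines would do for a first pass (a sketch; the
path is a placeholder for wherever the share gets mapped, and the read
pass may be served from the client's cache unless the file is larger
than client RAM):

```python
import os
import time

PATH = r"Z:\ramdisk_test.bin"  # placeholder: the mapped Samba share
MB = 1024 * 1024
SIZE = 256 * MB
block = os.urandom(MB)         # incompressible data, one megabyte at a time

t0 = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE // MB):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())       # make sure it actually went out, not just into a buffer
print("write: %.1f MB/s" % (SIZE / MB / (time.perf_counter() - t0)))

t0 = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(MB):
        pass
print("read:  %.1f MB/s" % (SIZE / MB / (time.perf_counter() - t0)))

os.remove(PATH)
```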

Shailesh
 
Previously Shailesh Humbad said:
[...]

Nice. Post the results when you have them. Mind, _I_ am not
interested in any Windows NAS, since I think that using MS products
for anything important is unprofessional, but the performance figures
will be interesting nonetheless.

Arno
 