I believe aaccli *can* be scripted. You merely supply the commands
I tried that. It works. But the output is mostly unusable,
since it still contains vt100 control sequences. And
of course AACCLI does _not_ support terminal type "dumb".
Another interesting point on your part:
By the output of ldd, it seems to me that aaccli relies on ncurses.
I believe that the handling of terminal types is done by ncurses,
rather than by aaccli directly.
It's indeed a deficiency in aaccli that it can't recognize a raw
file/pipe on its output and avoid curses altogether, the way some
better-behaved software does.
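(The dependency check itself is trivial - though note the path is just
a guess on my part, aaccli may be installed elsewhere:)

   ldd /usr/sbin/aaccli | grep curses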
You made me try this on my own.
I've made a simple script containing just "open aac0" and "exit".
If I ran "aaccli <sample.scp >output.txt" under TERM=linux, I could
indeed see some escape sequences in the output.
If I did "export TERM=dumb" though, the output seemed to lack all
escape sequences, except for the terminal-style EOLs (just CR == \r).
Nonetheless, it seems that curses translates a "clear screen" into
several hundred space characters (0x20) ahead of the useful data, so
there *is* some formatting to crunch if you want to process this
output with Perl or something.
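For instance, something like this might do as a first cleanup pass
(an untested sketch, assuming the CR-only line ends and leading
space padding described above; "clean.txt" is just a name I made up):

   export TERM=dumb
   aaccli <sample.scp >output.txt
   # turn the bare CRs into LFs, then strip the leading space padding
   # and drop the empty lines left over from the "clear screen":
   tr '\r' '\n' <output.txt | sed -e 's/^ *//' -e '/^$/d' >clean.txt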
Maybe I should try some more complex aaccli commands to learn
how messy this can get.
Backspace characters and such...
Not really an issue. It is needed only on writing, or reading
from a degraded array. It is difficult to really measure how
much effort goes into the RAID, but when I stream data to
an 8-disk RAID5 array on a dual Athlon 2800+ box, it takes
overall less than 50% of one CPU and gets 35 MB/s of real, sustained
speed. It could possibly be faster, but I did not invest much time
in tuning, since the array is mostly read from. I suspect the bottleneck
is the PCI switching between the different controllers (4, since
each TX4 is essentially 2 controllers on one chip).
Thanks for that data - this is perhaps the first time I've seen
a real-world benchmark of this kind posted by anyone.
Then again, I didn't try too hard to find one.
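FWIW, a crude way to take such a measurement yourself (just a sketch -
the mount point is made up, and the page cache would skew a short run,
hence the sync inside the timed command):

   # write 1 GB of zeroes to the array and time it, including the flush;
   # /mnt/raid5 is a hypothetical mount point for the array's filesystem
   time sh -c 'dd if=/dev/zero of=/mnt/raid5/testfile bs=1024k count=1024; sync'
   # 1024 MB divided by the elapsed seconds gives the sustained MB/s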
In my experience, various RAID5 controllers based on the
Intel IOP302/303 can do about 50 MBps sustained sequential
writes, averaged over 2 seconds (so the peak rate may actually
be much higher). The newer solutions based on IOP321/331 are
perhaps 2x or 3x as fast.
RAID0 transfer rates are somewhat faster still, so it is indeed the
RAID5 crunching that is a bit of a bottleneck, rather than the PCI bus.
Regarding the "PCI switching between different controllers":
These IOP chips also use PCI as their primary and secondary bus
(a dual-ported design) - the IOP302/303 have PCI66@64, the
IOP321/331 have PCI-X@133. That's 533 MBps or 1.066 GBps of
theoretical bandwidth, respectively. Compare that to 50 or 150 MBps.
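Spelled out, those theoretical figures are just bus width times clock
(64 bits = 8 bytes per transfer):

   8 bytes x  66 MHz = ~533 MBps    (PCI66@64)
   8 bytes x 133 MHz = ~1066 MBps   (PCI-X@133)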
The SCSI or SATA bus controller on a RAID card is usually a single
chip. This design style saves package count; up to 8-channel
controllers are available. Sometimes it takes two chips - it appears
that there's no other way to achieve 16 SATA channels.
I don't believe that multiplexing PCI transactions among two or four
discrete controller chips on a PCI bus would add any significant
overhead compared to multiplexing different channels within a
multi-channel controller IC.
My impression is that different channels, even in a single controller
chip, tend to have separate "register footprints" in the PCI bus
address space, separate state machines, etc. Even if not, you can't
merge transfers from two different channels into a single PCI
bus-master transaction, so you have to do the whole transaction setup
"per channel" in any case, and the different channels have to
arbitrate for PCI DMA concurrently anyway.
Different IRQ-sharing setups are possible either way - there are no
direct implications.
==> There's no fundamental difference between a single-chip controller
and four separate chips. In this respect, a PC system is not much
different from an IOP-based RAID controller.
I guess I've wandered way off topic here...
I'm running a software-based mirror (under Linux 2.4 => "md" devices)
on a small fileserver and I have to say I'm completely satisfied.
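For the record, creating such a mirror boils down to something like
this (a sketch with example device names; under 2.4 you might be
using the older raidtools and /etc/raidtab instead of mdadm):

   # assemble a two-disk RAID1 mirror from two partitions:
   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
   # watch the initial resync progress:
   cat /proc/mdstat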
Frank Rysanek