Cyclic redundancy error on BIG sound-recording program: saving often fails!

  • Thread starter: Setup.exe

Setup.exe

Hi,


Got a big problem:
I work at an FM radio station.
We use a small German program to record 15-hour loops, continuously, so
that we can save our work if needed - if something good happens live.
Here is the program: http://www.looprecorder.de/

This app uses a special method of disk writing:
it creates a TEMP folder holding 8 x 500 small WAV files (that's 15
hours' worth).

The problem is that when we save, the app sometimes crashes, saying
"temporary file too short" or sometimes "data chunk truncated". When I
analyse the temp folder, I find that just 3 errors among the 4000 small
files are enough to break the whole archive-saving process.
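One way to find the broken files before a save fails is to scan the temp folder for WAV files whose RIFF header disagrees with their actual size (the "data chunk truncated" message suggests exactly that mismatch). This is a hypothetical pre-check sketch, not part of Loop Recorder; the folder layout and file naming are assumptions:

```python
import os
import struct

def check_wav(path):
    """Return None if the file looks intact, else a short description of
    the problem. A WAV file starts with a 12-byte RIFF header whose size
    field must equal the actual file size minus 8 bytes."""
    actual = os.path.getsize(path)
    with open(path, "rb") as f:
        header = f.read(12)
    if len(header) < 12 or header[:4] != b"RIFF" or header[8:12] != b"WAVE":
        return "not a RIFF/WAVE file"
    declared = struct.unpack("<I", header[4:8])[0] + 8
    if declared != actual:
        return "header declares %d bytes, file has %d" % (declared, actual)
    return None

def scan_temp(folder):
    """List (filename, problem) for every suspect .wav file in folder."""
    bad = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".wav"):
            problem = check_wav(os.path.join(folder, name))
            if problem:
                bad.append((name, problem))
    return bad
```

Running this over the 4000-file temp folder would list the handful of damaged files rather than letting the save abort on them.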

That's awful!

What's more, it's impossible to get any answer from the authors of the
app: they NEVER reply; I've been trying to reach them for years.

I use a 150 GB IDE drive dedicated to the program's temp folder, and I
often run chkdsk /f on it: it shows no errors, yet it seems to repair
some small ones (there are fewer errors afterwards when I copy/cut the
temp folder to rebuild part of the lost recordings).

I'm confused.
So I had the idea of reformatting the drive with a larger cluster size,
and that's my question: is that a good idea for a HEAVY disk-writing
process? Don't forget the program runs 24/7/365.

What other "drive solution" should I adopt?
A SATA drive? A special drive? An SSD?
I really don't know.
I hope someone can help...

Thanks in advance

Julien
 
Setup.exe said:
Hi,

Got a big problem:
I work at an FM radio station.
We use a small German program to record 15-hour loops, continuously, so
that we can save our work if needed - if something good happens live.
Here is the program: http://www.looprecorder.de/

This app uses a special method of disk writing:
it creates a TEMP folder holding 8 x 500 small WAV files (that's 15
hours' worth).

The problem is that when we save, the app sometimes crashes, saying
"temporary file too short" or sometimes "data chunk truncated". When I
analyse the temp folder, I find that just 3 errors among the 4000 small
files are enough to break the whole archive-saving process.

That's awful!

What's more, it's impossible to get any answer from the authors of the
app: they NEVER reply; I've been trying to reach them for years.

You could try asking for your money back, that might get a response.

I would try sending a polite email to every email address you can find on
their website, asking for assistance.
I use a 150 GB IDE drive dedicated to the program's temp folder, and I
often run chkdsk /f on it: it shows no errors, yet it seems to repair
some small ones (there are fewer errors afterwards when I copy/cut the
temp folder to rebuild part of the lost recordings)

Whilst chkdsk is good, you can often get an idea of whether there might be
disk-related errors by looking in the System event log - run up Event
Viewer and then skim through the System event log. Ignore information and
warning messages, and (for now) any errors that aren't disk-related. I
suspect there won't be any, but if there are disk-related errors recorded
then that would indicate a hardware issue somewhere.

The next time you get an error, go straight to the system event log and have
a look to see if anything has been reported there.
I'm confused.
So I had the idea of reformatting the drive with a larger cluster size,
and that's my question: is that a good idea for a HEAVY disk-writing
process? Don't forget the program runs 24/7/365.

Whilst I haven't used this software or experienced these specific errors, I
have encountered odd errors before now with a variety of long-running
applications which generate files. This might be a red herring, but I
wonder whether disk fragmentation might be an issue - either disk
fragmentation or directory fragmentation. Given that the individual files
are probably quite small (4MB ish?), I doubt file fragmentation is an
issue, but I wonder whether directory fragmentation might be a problem?
Could you download contig from
http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx and then at
the command line do

contig <temp-path>
and, just for completeness
contig -s <temp-path>

The first will defrag the directory entry, the second the files in it. Most
normal disk defragmenters won't defragment the directory entry, whereas
contig will. I've seen fragmentation cause quite a few products to fail, and
it can be hard to track down - but admittedly only when it's dealing with a
very large, very fragmented file and not lots of small individual ones.
What other "drive solution" should I adopt?
A SATA drive? A special drive? An SSD?
I really don't know.
I hope someone can help...

If you do get it sorted please post back what the issue and resolution were.
 
On 6/29/2012 4:02 PM, Brian Cryer wrote:
You could try asking for your money back, that might get a response.

I would try sending a polite email to every email address you can find
on their website, asking for assistance.

I did that already; they are deaf.
Even worse, their program is obviously totally out of date; perhaps
that's why it fails. There has been no recent upgrade. Some windows in
the program still refer to Win98 system files, and of course that
doesn't work.

Besides, it's a shame, because the app could be (or is) excellent, and I
have never seen an equivalent.
Whilst chkdsk is good, you can often get an idea of whether there might
be disk-related errors by looking in the System event log - run up Event
Viewer and then skim through the System event log. Ignore information
and warning messages, and (for now) any errors that aren't disk-related.
I suspect there won't be any, but if there are disk-related errors
recorded then that would indicate a hardware issue somewhere.

The next time you get an error, go straight to the system event log and
have a look to see if anything has been reported there.

Oh yes: tons of RED disk errors: "Device bla bla has a bad block"
(Event 7). OK, but when running CHKDSK, it says all is fine: no bad
sectors. Is a "bad block" different from a "bad sector", and so not
reported in the chkdsk log?
Whilst I haven't used this software or experienced these specific errors,
I have encountered odd errors before now with a variety of long-running
applications which generate files. This might be a
red herring,

"Red herring": what does that mean?

but I wonder whether disk fragmentation might be an issue -
either disk fragmentation or directory fragmentation. Given that the
individual files are probably quite small (4MB ish?)

Even smaller: 512 KB!


then I doubt file
fragmentation is an issue, but I wonder whether directory fragmentation
might be a problem? Could you download contig from
http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx and then
at the command line do

contig <temp-path>
and, just for completeness
contig -s <temp-path>

The first will defrag the directory entry, the second the files in it.
Most normal disk defragmenters won't defragment the directory entry,
whereas contig will. I've seen fragmentation cause quite a few products
to fail, and it can be hard to track down - but admittedly only when
it's dealing with a very large, very fragmented file and not lots of
small individual ones.

As the disk is always writing (recording, in fact), I'm not sure it's a
good idea to defragment it at the same time... But I'll think it over
and try this test.
If you do get it sorted please post back what the issue and resolution
were.

OK, thanks for the reply.
I first have to check these "bad block" errors...
 
Setup.exe said:
Hi,

Got a big problem:
I work at an FM radio station.
We use a small German program to record 15-hour loops, continuously, so
that we can save our work if needed - if something good happens live.
Here is the program: http://www.looprecorder.de/

This app uses a special method of disk writing:
it creates a TEMP folder holding 8 x 500 small WAV files (that's 15
hours' worth).

The problem is that when we save, the app sometimes crashes, saying
"temporary file too short" or sometimes "data chunk truncated". When I
analyse the temp folder, I find that just 3 errors among the 4000 small
files are enough to break the whole archive-saving process.

That's awful!

What's more, it's impossible to get any answer from the authors of the
app: they NEVER reply; I've been trying to reach them for years.

I use a 150 GB IDE drive dedicated to the program's temp folder, and I
often run chkdsk /f on it: it shows no errors, yet it seems to repair
some small ones (there are fewer errors afterwards when I copy/cut the
temp folder to rebuild part of the lost recordings).

I'm confused.
So I had the idea of reformatting the drive with a larger cluster size,
and that's my question: is that a good idea for a HEAVY disk-writing
process? Don't forget the program runs 24/7/365.

What other "drive solution" should I adopt?
A SATA drive? A special drive? An SSD?
I really don't know.
I hope someone can help...

Thanks in advance

Julien

From their web site:

Loop Recorder has 64bit support to handle recorded audio data of up to
16 GB.

Loop Recorder Pro can handle even larger amounts of audio data. With a
FAT32-filesystem it can record audio data of up to 256 GB per session
and with NTFS it is unlimited.

So you must be using their Standard version and not their Pro version.
Or have you been indefinitely using their free trial version, which
does not include support?

The Pro version has the archive function, not the Standard version (and
very probably not the trial version).

Changing cluster size won't fix a defect in the program's 'Save'
algorithm, such as failing to close files on which it still has a
handle. From http://www.looprecorder.de/tut_radio.php, it appears you
don't just save the recording; instead you open an editor and do the
save from there. http://www.looprecorder.de/tut_editor.php shows the
editor that should open when you click "Edit and Save". Are you doing
any editing before you click the Save button (disk icon in the toolbar)?

Are you clicking on the "Edit and Save" button in the Loop Recording
dialog or are you instead bringing up the editor and saving from there
without first stopping the looped recording? Presumably you must first
stop recording before you start editing and then save.

What "3 errors" are you talking about (in the temp folder with the 4000
files)? What was this "analysis" you mention?

Do you really need to generate 4000 files for 15 hours of recording?
From their product description, it looks like you can create some huge
audio files, the size limit depending on which file system you use for
the drive. http://www.looprecorder.de/tut_diskusage.php shows file
sizes up to about 300GB, which is obviously a lot larger than your
"small files" comment. How big is your loop? Are you leaving it at the
small default of 10 minutes? Or did you raise it to hours? I don't see
that the program is restricted from going up to your 15-hour window,
but then I cannot see the parameters you can specify when you click the
Change button for Time Settings in the Loop Recorder dialog (shown at
http://www.looprecorder.de/tut_radio.php). You aren't hitting a maximum
file-count limit per folder in the file system, but you might be
hitting a maximum object count in the program.

Since you are talking about capturing "live" content, you're talking
about people talking on your show (you don't mention what your show
airs, so maybe "live" content is local talent coming into your studio
to play their music). If it's just talk you want to capture, you could
lower the bitrate to reduce the size of the audio file(s). Their
diskusage web page shows how much difference there is in the size of
the audio files at different bitrates. You never mentioned how long
the actual loop you are recording is, but having 4000 audio files sure
makes it appear you chose a small loop size. My guess is that each
loop generates an audio file. So make the loops longer. If the audio
files then get too big, reduce the capture bitrate.
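The arithmetic behind that trade-off is simple enough to sketch. The bitrates below are illustrative assumptions, not Loop Recorder settings:

```python
def recording_size_bytes(bitrate_kbps, hours):
    """Approximate size of a compressed recording: kilobits per second
    divided by 8 gives bytes per second, times the duration."""
    return int(bitrate_kbps * 1000 / 8 * hours * 3600)

# 15 hours at 128 kbps (a common FM-quality MP3 rate) stays under 1 GB;
# halving the bitrate for talk-only capture halves the size.
full_quality = recording_size_bytes(128, 15)  # 864,000,000 bytes
speech_only = recording_size_bytes(64, 15)    # 432,000,000 bytes
```

Either figure fits easily on the dedicated 150 GB drive, so file size itself is not the constraint here; file count may be.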

Well, it's possible you're hitting a max file count per folder,
depending on what file system you are using on the drive, which you
never mentioned. http://en.wikipedia.org/wiki/Fat32 has a table that
shows the maximum number of files per folder. If you are using the old
FAT12 file system, the max is 4068 files per folder. You didn't even
mention which operating system you are using. So what OS, and what
file system(s) within it, are you using?
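A quick way to see whether a folder is approaching such a cap is to count its entries against the limit for the file system. The limits table below is illustrative, loosely after the Wikipedia table cited above; verify the numbers against your actual file system:

```python
import os

# Illustrative per-folder entry limits; NTFS has no practical cap for
# this purpose. Treat these figures as assumptions to be verified.
FOLDER_LIMITS = {"FAT12": 4068, "FAT16": 65536, "FAT32": 65534}

def near_folder_limit(folder, filesystem, margin=0.9):
    """True if the folder's entry count has reached `margin` (a
    fraction) of the per-folder limit for the given file system."""
    limit = FOLDER_LIMITS.get(filesystem.upper())
    if limit is None:
        return False  # e.g. NTFS: effectively unlimited
    count = sum(1 for _ in os.scandir(folder))
    return count >= limit * margin
```

With 4000 files in one folder, a FAT12-style 4068-entry limit would already be within a few dozen files of failure, which is suspiciously close to the symptom described.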

Are you running an anti-virus program or anything else that interrogates
the files on your computer? If so, have you tried excluding this
program's "temp" folder?

You tried all their contacts at http://www.looprecorder.de/email.php,
even the postal address (by sending your letter with signature
confirmation to prove they got it), and they didn't reply? Support
should be included in the paid product. You did pay for it, right, and
you're not just indefinitely using their trial product, right?
Personally I get suspicious of any company proliferating commercialware
that hides behind a private domain registration (their registrar is
listed as the registrant instead of the real registrant) to hide their
identity. looprecorder.de's registration shows their registrar (1and1)
as the registrant. Registrants pay extra to hide.
 
Setup.exe said:
On 6/29/2012 4:02 PM, Brian Cryer wrote:

I did that already; they are deaf.
Even worse, their program is obviously totally out of date; perhaps
that's why it fails. There has been no recent upgrade. Some windows in
the program still refer to Win98 system files, and of course that
doesn't work.

Besides, it's a shame, because the app could be (or is) excellent, and I
have never seen an equivalent.


Oh yes: tons of RED disk errors: "Device bla bla has a bad block"
(Event 7). OK, but when running CHKDSK, it says all is fine: no bad
sectors. Is a "bad block" different from a "bad sector", and so not
reported in the chkdsk log?


"Red herring": what does that mean?

but I wonder whether disk fragmentation might be an issue -

Even smaller: 512 KB!


then I doubt file

As the disk is always writing (recording, in fact), I'm not sure it's a
good idea to defragment it at the same time... But I'll think it over
and try this test.

OK, thanks for the reply.
I first have to check these "bad block" errors...

There is an article here for "bad block" Event 7.

http://www.symantec.com/business/support/index?page=content&id=TECH16938

The Sense Code is in the numeric information stored in the Event. You
convert the numbers, into a text string, to gain a better understanding
of the error type.
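As an illustration of that conversion step: the sense-key names below are the standard SCSI ones, but exactly which byte of the event's binary data holds the sense key varies by driver, so treat this lookup as a sketch of the decoding, not a ready-made Event 7 parser.

```python
# Standard SCSI sense-key names; the sense key is the low nibble of the
# relevant byte in the event's binary data.
SENSE_KEYS = {
    0x0: "NO SENSE",
    0x1: "RECOVERED ERROR",
    0x2: "NOT READY",
    0x3: "MEDIUM ERROR",
    0x4: "HARDWARE ERROR",
    0x5: "ILLEGAL REQUEST",
    0x6: "UNIT ATTENTION",
    0x7: "DATA PROTECT",
    0xB: "ABORTED COMMAND",
}

def sense_key_name(sense_byte):
    """Mask off the low nibble and map it to its standard name."""
    key = sense_byte & 0x0F
    return SENSE_KEYS.get(key, "UNKNOWN (0x%X)" % key)
```

A "bad block" on a failing surface typically decodes to MEDIUM ERROR (sense key 0x3).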

*******

Download HDTune (version 2.55, the free version, is good enough for this purpose)

http://www.hdtune.com/files/hdtune_255.exe

Install it, and then run it. Select the recording disk drive from the menu.

Then, click the "Health" tab.

The disk drive "Health" is reported by the SMART statistics system
on the hard drive.

http://en.wikipedia.org/wiki/S.M.A.R.T.

The display may already indicate (by red/yellow/green colors), there is a
problem. Some of the yellow indicators are not truthful - do not panic
immediately, if the display has some colors. Even my brand new drives,
have yellow entries that should be ignored.

Check "Reallocated Sector Count" first. The first line of numbers, is
from a brand new Seagate hard drive. The second line of numbers, is
from my failing Seagate 500GB drive. The second line is meant to indicate
what a degradation of the hard drive looks like. First, the Data value
rises, and when it gets high enough, the columns on the left start
to decrement. The left hand column is an indication of useful life, as
in there is "98 %" of life in terms of reallocation of sectors.

Current Worst Threshold Data Status

100 100 36 0 OK

98 98 36 104 OK

It would seem, in this case, that "Data" is the raw count. The columns
on the left, changed from 100 to 98, the day that Data passed the 100
mark. I interpret this trend to mean that the "total life" of the drive
is roughly (100-36)/(100-98) * 104 = 3328 reallocated sectors. My disk
is still "OK" as far as the status indicator is concerned, and the color
will change when the "Data" column hits greater than roughly 3200. It
took a bad drive, for me to get a demonstration of how it works.
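That back-of-the-envelope extrapolation can be written out directly. It assumes, as above, that the normalized value falls linearly with the raw count; this is a rule of thumb, not a manufacturer formula:

```python
def projected_raw_at_threshold(start, current, threshold, raw):
    """Extrapolate the raw reallocated-sector count at which the
    normalized SMART value would hit the failure threshold, assuming
    the normalized value falls linearly with the raw count."""
    drop = start - current
    if drop <= 0:
        raise ValueError("no degradation observed yet")
    return (start - threshold) / drop * raw

# The failing drive above: (100 - 36) / (100 - 98) * 104 = 3328.0
life_estimate = projected_raw_at_threshold(100, 98, 36, 104)
```

So the color should flip to a warning somewhere around 3,300 reallocated sectors on that particular drive.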

The second indication is "Current Pending Sector". The status here, has
not changed for me, since I purchased the drive. Current Pending Sector
is a queue of sectors that need correction by the controller card. It
seems on my drive, that the queue never grows, and when a problem
occurs on the drive (on write), it is fixed immediately. This queue
is supposed to grow, when a read failure occurs, and the sector is
scheduled for repair, on the next write operation. The "Data" column
will return to zero, when the sector is processed and reallocated
or not. A write attempt for the sector is needed, before it can
be repaired. A sudden increase in "Data" here, could happen just
before the "Reallocated" starts to grow.

Current Worst Threshold Data Status

100 100 0 0 OK

In terms of health of the drive, and safety of the data, I replace
the hard drive, as soon as "Reallocated Sector Count" and the
Data column value is no longer 0. My drive still has not failed,
and has been running for about a week after the Data column showed
a problem.

But hard drives can cease to function, very quickly, so the problem
should not be ignored. Make a backup copy of the contents of the
hard drive, on a second disk somewhere. Like you would, for normal
backup procedures for the computer. Then, if the drive fails completely,
you can restore anything you need.

There is a "level of dishonesty" about "Reallocated Sector Count".
When the drive leaves the factory, there can be 500,000 reallocated
sectors on the drive. And the "Data" column would show 0. The manufacturer
does not want you to know, about the level of factory defects.
So the statistic is skewed, and is likely not completely linear.
As users, we cannot tell, whether my example of "104" above, is
104 actual sectors, or some other number (i.e. scaled math).

*******

If the disk drive has a problem, then the software designers at
looprecorder.de are not guilty. They cannot assume a defective drive,
and use QuickPar to try to improve the error rate. It's not a valid
design objective. It would be a valid objective, for a spacecraft,
where storage devices could fail while the
craft is in flight. Here on Earth, we replace disk drives
when they become defective. I have replaced my disk drive, within
the last week, because of problems.

When "Reallocated Sector Count" grows, the peak write rate of the
hard drive, will fall. Performance will be "choppy". If the drive
takes 15 seconds to complete a write operation, because of bad sectors,
recording samples from looprecorder could be lost.

*******

If you wish to test another recording application, there is
Audacity from audacity.sourceforge.net.

http://audacity.sourceforge.net/

It can be configured, to write sound samples into files, for
later analysis. If you suspect looprecorder is not functioning
well, that might be a free alternative.

*******

The Windows built-in Sound Recorder, stores recorded sound in
system RAM. And the recording duration is limited by available
RAM.

Other recording programs, may initially store sound in RAM,
and then transfer blocks of data into files on the file system.
There should in fact, be plenty of time to resolve write problems
to the disk, due to the large buffer space provided by system
RAM. But it could be, that an error code, returned after a 15 second
attempt to write, causes the recording program to throw away
that segment of recorded data. And then, you see a corresponding
Event Viewer entry, for the failure that the operating system logged.

Summary: Get a new hard drive.
Replace the existing drive, before it fails.
You can use the old drive as a backup device, but knowing
it could fail at any time, and cannot be "trusted".

Paul
 
Hi,


Got a big problem:
I work at an FM radio station.
We use a small German program to record 15-hour loops, continuously,
so that we can save our work if needed - if something good happens
live.
Here is the program: http://www.looprecorder.de/

This app uses a special method of disk writing:
it creates a TEMP folder holding 8 x 500 small WAV files
(that's 15 hours' worth).

The problem is that when we save, the app sometimes crashes,
saying "temporary file too short" or sometimes "data chunk
truncated". When I analyse the temp folder, I find that just 3
errors among the 4000 small files are enough to break the whole
archive-saving process.

That's awful!

What's more, it's impossible to get any answer from the authors
of the app: they NEVER reply; I've been trying to reach them
for years.

I use a 150 GB IDE drive dedicated to the program's temp folder,
and I often run chkdsk /f on it: it shows no errors, yet it
seems to repair some small ones (there are fewer errors
afterwards when I copy/cut the temp folder to rebuild part of
the lost recordings).

I'm confused.
So I had the idea of reformatting the drive with a larger
cluster size, and that's my question: is that a good idea for
a HEAVY disk-writing process? Don't forget the program runs
24/7/365.

What other "drive solution" should I adopt?
A SATA drive? A special drive? An SSD?
I really don't know.
I hope someone can help...

Thanks in advance

Julien
You really should be running a better checking program than
chkdsk. I would suggest SpinRite. If there are any disk
problems it will find them.
http://www.grc.com/sr/spinrite.htm
 
There is an article here for "bad block" Event 7.

http://www.symantec.com/business/support/index?page=content&id=TECH16938

The Sense Code is in the numeric information stored in the Event. You
convert the numbers, into a text string, to gain a better understanding
of the error type.

*******

Download HDTune (version 2.55, the free version, is good enough for this purpose)

http://www.hdtune.com/files/hdtune_255.exe

Install it, and then run it. Select the recording disk drive from the menu.

Then, click the "Health" tab.

The disk drive "Health" is reported by the SMART statistics system
on the hard drive.

http://en.wikipedia.org/wiki/S.M.A.R.T.

The display may already indicate (by red/yellow/green colors), there is a
problem. Some of the yellow indicators are not truthful - do not panic
immediately, if the display has some colors. Even my brand new drives,
have yellow entries that should be ignored.

Check "Reallocated Sector Count" first. The first line of numbers, is
from a brand new Seagate hard drive. The second line of numbers, is
from my failing Seagate 500GB drive. The second line is meant to indicate
what a degradation of the hard drive looks like. First, the Data value
rises, and when it gets high enough, the columns on the left start
to decrement. The left hand column is an indication of useful life, as
in there is "98 %" of life in terms of reallocation of sectors.

Current Worst Threshold Data Status

100 100 36 0 OK

98 98 36 104 OK

It would seem, in this case, that "Data" is the raw count. The columns
on the left, changed from 100 to 98, the day that Data passed the 100
mark. I interpret this trend to mean that the "total life" of the drive
is roughly (100-36)/(100-98) * 104 = 3328 reallocated sectors. My disk
is still "OK" as far as the status indicator is concerned, and the color
will change when the "Data" column hits greater than roughly 3200. It
took a bad drive, for me to get a demonstration of how it works.

The second indication is "Current Pending Sector". The status here, has
not changed for me, since I purchased the drive. Current Pending Sector
is a queue of sectors that need correction by the controller card. It
seems on my drive, that the queue never grows, and when a problem
occurs on the drive (on write), it is fixed immediately. This queue
is supposed to grow, when a read failure occurs, and the sector is
scheduled for repair, on the next write operation. The "Data" column
will return to zero, when the sector is processed and reallocated
or not. A write attempt for the sector is needed, before it can
be repaired. A sudden increase in "Data" here, could happen just
before the "Reallocated" starts to grow.

Current Worst Threshold Data Status

100 100 0 0 OK

In terms of health of the drive, and safety of the data, I replace
the hard drive, as soon as "Reallocated Sector Count" and the
Data column value is no longer 0. My drive still has not failed,
and has been running for about a week after the Data column showed
a problem.

But hard drives can cease to function, very quickly, so the problem
should not be ignored. Make a backup copy of the contents of the
hard drive, on a second disk somewhere. Like you would, for normal
backup procedures for the computer. Then, if the drive fails completely,
you can restore anything you need.

There is no backup to make, as the entire drive is dedicated to the
Loop Recorder temp folder. There is no other data on it; saves are
made to another drive.

There is a "level of dishonesty" about "Reallocated Sector Count".
When the drive leaves the factory, there can be 500,000 reallocated
sectors on the drive. And the "Data" column would show 0. The manufacturer
does not want you to know, about the level of factory defects.
So the statistic is skewed, and is likely not completely linear.
As users, we cannot tell, whether my example of "104" above, is
104 actual sectors, or some other number (i.e. scaled math).

*******

If the disk drive has a problem, then the software designers at
looprecorder.de are not guilty. They cannot assume a defective drive,
and use QuickPar to try to improve the error rate.

Of course, yes.


It's not a valid
design objective. It would be a valid objective, for a space craft
flying through space, where storage devices could fail while the
space craft is in flight. Here on Earth, we replace disk drives
when they become defective. I have replaced my disk drive, within
the last week, because of problems.

When "Reallocated Sector Count" grows, the peak write rate of the
hard drive, will fall. Performance will be "choppy". If the drive
takes 15 seconds to complete a write operation, because of bad sectors,
recording samples from looprecorder could be lost.

*******

If you wish to test another recording application, there is
Audacity from audacity.sourceforge.net.

http://audacity.sourceforge.net/

It can be configured, to write sound samples into files, for
later analysis. If you suspect looprecorder is not functioning
well, that might be a free alternative.

We absolutely need automated loop-mode recording, and a 15-hour
duration is a minimum. I'm not sure it's easy to get that from other
software.

*******

The Windows built-in Sound Recorder, stores recorded sound in
system RAM. And the recording duration is limited by available
RAM.

Other recording programs, may initially store sound in RAM,
and then transfer blocks of data into files on the file system.
There should in fact, be plenty of time to resolve write problems
to the disk, due to the large buffer space provided by system
RAM. But it could be, that an error code, returned after a 15 second
attempt to write, causes the recording program to throw away
that segment of recorded data. And then, you see a corresponding
Event Viewer entry, for the failure that the operating system logged.

Summary: Get a new hard drive.
Replace the existing drive, before it fails.
You can use the old drive as a backup device, but knowing
it could fail at any time, and cannot be "trusted".

Paul

Wow, many thanks for this substantial response. I'll have to read it
again, as English isn't my first language and my maths is poor!

But I already knew HD Tune and what SMART is, so I took the readings;
here is what they look like:
http://nobody4.free.fr/ZVRAC/HD_TUNE.jpg
 
Setup.exe said:
Wow, many thanks for this substantial response. I'll have to read it
again, as English isn't my first language and my maths is poor!

But I already knew HD Tune and what SMART is, so I took the readings;
here is what they look like:
http://nobody4.free.fr/ZVRAC/HD_TUNE.jpg

Very interesting.

Some observations:

1) Your hard drive is too hot!

58C will ruin the hard drive.

Fit forced air cooling near the drive.

The best way to do this, is mount a fan capable of drawing cool
room air into the computer case, and have it blow directly
over the surface of the hard drive.

High temperature operation like that, will shorten the lifetime
of the hard drive motor. The lubricant inside the sealed motor,
will be forced out of the motor.

2) Reallocated sectors = 1, is not a bad number.

3) Current Pending Sectors count = 6 is more interesting.
It seems, for whatever reason, your drive may be having more problems
reading the data back later. My failing drive doesn't do that.
The thing is, since you're "loop recording", there should be many
opportunities to reduce the Current Pending Sector count, and
increase the Reallocated Sectors. But your Reallocated Sectors
is not grown significantly.

I do not know how to interpret this information, except to suggest
the high operating temperature is partially responsible. Cool off the drive.

4) Power on Hours is 14819. That's roughly 617 days of continuous
24 hour operation. Most of my drives, don't last that long here.

The customer reviews (Feedback button) for that drive, are excellent.
When it comes time to replace that hard drive with another, it will
not last nearly as long in your application.

HGST HDS722516VLAT80 Deskstar 7K250 IDE Ultra ATA100 160GB 7200 RPM

http://www.newegg.com/Product/Product.aspx?Item=N82E16822145061

Your drive was designed some time after the IBM fiasco. IBM sold
their consumer disk division to Hitachi, after the incident described
here, and then they became HGST. And presumably, they learned from
their mistakes. I would say your 7K250 has held up well.

http://en.wikipedia.org/wiki/Deskstar

*******

Some computer cases, have an air intake fan, near the drive
bay area. This fan draws cool air from the room, and blows it
over the drive bay.

http://media.bestofmicro.com/H/I/253350/original/thermaltake_m9_intake.jpg

Mechanically, it's better if the fan is attached to the computer
case, rather than to the drive bay. Attached directly to the bay,
vibrations from the fan can couple into the disk drive.

http://media.bestofmicro.com/H/D/253345/original/thermaltake_m9_cage.jpg

I use an externally mounted, 120mm fan, to cool my drives. The
fan is bolted to the front of the computer, using a home-made
aluminum mounting frame. Current operating temperature of my drives
(as reported by HDTune) is 29C and 30C, and that's because it is a
summer's day. I am nowhere near 58C.

HTH,
Paul
 
Paul said:
Very interesting.

Some observations:

1) Your hard drive is too hot!

58C will ruin the hard drive.

According to:

http://www.hgst.com/tech/techlib.nsf/techdocs/B272B6575A7B410886256CE90058095B/$file/D7k250_ps1.pdf

The maximum *operating* temperature (that means the sustained or
constant temperature) for the device is 55 C, so 58 C is too high but
not by a lot. Remember that HDtune is stressing the device. I doubt
the drive is working as hard to occasionally dump a buffer from the
recording application to update or append to an audio file. I'm not
sure the OP simply went into HDtune to go look at the current SMART
values or if he ran their speed tests and then looked at the SMART data.

The OP may only need to include the hard disk in the fan speed control
software so the case fan speeds up when the hard disk gets too hot.
Due to hysteresis (lag in temperature readings), I'd probably set a max
temp of 50 C at which the case fan speeds up and 55 C as the warning
temperature. If the OP's computer/OS setup didn't include thermal
monitoring and fan control, Speedfan might work (but the OP would have
to know which measurements in Speedfan were for which temperature and
fan speed sensors and then rename them so he remembers later which
measurement is for what). The specs for my WDC hard disk state its
*operating* temperature maximum is 60 C, so my thresholds are 55 and 60
(to increase case fan and to alert). If a 5 C leeway isn't enough to
get the air moving in time to keep the drive's temperature below its
maximum operating temperature, it's time to blow out the dust bunnies.
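That two-threshold scheme can be sketched in a few lines. The 50/55 C trip points are the ones suggested above; the 45 C resume point is an illustrative choice to show the hysteresis idea, not a value from any particular tool:

```python
# Minimal sketch of two-threshold fan control with hysteresis.
SPEED_UP_AT = 50   # degrees C: raise case fan speed
WARN_AT     = 55   # degrees C: alert the user
RESUME_AT   = 45   # degrees C: fall back to normal speed only below this

def fan_action(temp_c, fan_boosted):
    """Return (new_fan_boosted, warn) for one temperature sample."""
    warn = temp_c >= WARN_AT
    if temp_c >= SPEED_UP_AT:
        fan_boosted = True
    elif temp_c <= RESUME_AT:      # hysteresis: don't flap between speeds
        fan_boosted = False
    return fan_boosted, warn

boosted = False
for t in (42, 48, 51, 53, 56, 49, 44):
    boosted, warn = fan_action(t, boosted)
```

The gap between the speed-up and resume thresholds is what keeps the fan from cycling up and down around a single trip point as the reading lags.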

However, I typically do not pile hard disks right next to each other in
the drive cage. I like to leave space between them. Even if the case
fan's speed goes up, there's not enough space between adjacent drives to
move cooling air between them. And making sure any fat/wide cables
don't block airflow is important, too. If you can't move them out of
the way, twist them so they are inline with the airflow, not against it.
3) Current Pending Sectors count = 6 is more interesting.

Only if the count doesn't go down later. This is a pending value.
Sectors that happen to get a read error may not do so later. This
operation is left pending the next write operation to those sectors to
see if they're still bad. Unless you do something to write to the same
suspect sectors, they won't get retested and then determined if bad or
good. Transient errors can make a sector look bad but when later tested
it is good. Some drives include logic to do the retest when the drive
is quiescent (i.e., offline correction), some don't. I haven't really
found good info at the drive makers' site on which do and which don't.
Just because the SMART data lists the Offline Uncorrectable count
doesn't mean the drive has that logic. Some makers include SMART values
that have no value (they're worthless, not that they have a zero value).
Just look at the C2 attribute for Temperature. Uh huh, sure, the drive
is running at 1,441,850 C (or over 2 million degrees Fahrenheit).
4) Power on Hours is 14819. That's roughly 617 days of continuous 24
hour operation. Most of my drives, don't last that long here.

That's not a continuous power-on reading. That's the total number of
hours that the device has been powered on. So the 14819 value could be
over 2 years of continuous use or over 10 years of interrupted use.
This value may also include (depending on how the maker implemented the
SMART value) the low-power state of the device, when its logic remains
powered but the platters aren't spinning.

The Power Cycle count is 123. According to definition, that's the count
of full on/off power cycles for the device. That won't include when the
device is in low-power mode (logic is powered but platters aren't
spinning). So, assuming the power-save modes on the computer were NOT
configured to spin down the platters after some specified period with no
activity addressed to the device, 14819 hours divided by 123 power
cycles is, on average, about 120 hours of constant up-time during each
power cycle, or about 5 days.
let the device spin down after being idle awhile, the "powered" state of
the device would be even shorter. Since the author says they run the
recording program in a constant loop so it is always recording then
there is probably no spin-down of the drive and we're back to the 5-day
average up-time. Of course, one power cycle might've had the hard disk
up only a few minutes while another power cycle had it for a year.
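That average, as a quick check:

```python
# Average up-time per power cycle, from the SMART values quoted above.
power_on_hours = 14819   # Power-On Hours attribute
power_cycles   = 123     # Power Cycle Count attribute

avg_hours_per_cycle = power_on_hours / power_cycles
avg_days_per_cycle  = avg_hours_per_cycle / 24

print(round(avg_hours_per_cycle))  # ~120 hours
print(round(avg_days_per_cycle))   # ~5 days
```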

The other problem with the Power-On Hours count is that it can be wildly
inaccurate. For some old drives, the value would wrap around (reset to
zero) or it would overflow (becomes all 1 bits) and start counting
downward or the value would advance in erratic increments.

Getting the operating temperature down to 55 C, or less, is probably
of primary concern to the OP. Then watch the *pending* sector count
(bad sectors are remapped only AFTER they still test bad when next
rewritten) to see if it keeps going up, gradually drops, or stays the
same.
 
VanguardLH said:
The maximum *operating* temperature (that means the sustained or
constant temperature) for the device is 55 C, so 58 C is too high but
not by a lot. Remember that HDtune is stressing the device.

Actually, that's not true. When you click the "Health" tab, none
of the other tests within HDTune can be running. If you start the program
and click no tabs, nothing happens. If you click the Health tab, the
SMART command is given and the statistics read out and displayed
on the screen. SMART is updated at regular intervals, so the
SMART command is issued in a slow polling mode.

If you were quick about it, you could issue the Benchmark command,
stop it half way, then click the Health tab, you might see some
elevated temperature from the Benchmark activity that just
finished. But you cannot run both activities simultaneously.

I think you can get a temperature reading, while you Benchmark (it
shows in the task bar), but the other SMART stats cannot be observed.

You can be running HDTune in parallel with other applications, in
which case you can be monitoring while stress is being applied. But
even then, there are limits. HDTune will not connect to disks which
are too busy. I observed this just yesterday, when testing two
brand new disks. They would not show up as choices in the HDTune
menu, as long as the other tests (with a separate program) were running.
If my other tests were stopped, then I could get HDTune to list
the disks in question.
Only if the count doesn't go down later. This is a pending value.
Sectors that happen to get a read error may not do so later.

The OP is writing continuously to the drive. Which gives no opportunity
for read errors to show up (for Pending to climb) and gives plenty of
opportunities for Pending sectors to be resolved. It's an ideal
situation in terms of making the value go down.

On my failing drive, the Pending count has stayed at zero the whole
time, while the reallocations grow on write. Which is a bit strange.
I can read-verify my failing drive, and the Pending will not go up.
But if I do fresh writes to the drive (write 500GB of video), then
I see fresh reallocations the next time I check SMART. And the whole
thing, has made a mockery of my understanding of how automatic
reallocation works. For my drive to work the way it does, implies
the read head follows behind the write head, and can observe what
has just been written (read-after-write). And how likely is that ?

Paul
 
Paul said:
Actually, that's not true. When you click the "Health" tab, none
of the other tests within HDTune can be running. If you start the program
and click no tabs, nothing happens. If you click the Health tab, the
SMART command is given and the statistics read out and displayed
on the screen. SMART is updated at regular intervals, so the
SMART command is issued in a slow polling mode.

Okay, but that doesn't preclude running the tests and then checking the
Health status. The drive would still be warmed from the tests when you
went to check on the Health. The OP didn't mention running the tests so
hopefully the Health status was for his disk in a rather idle state.

If the OP is getting 58 C without doing the HDtune testing, and he didn't
do anything disk-intensive before looking at the Health status, then
58 C would be extreme for a hard disk that has been sitting idle for,
say, around 20 minutes or more.
The OP is writing continuously to the drive. Which gives no opportunity
for read errors to show up (for Pending to climb) and gives plenty of
opportunities for Pending sectors to be resolved. It's an ideal
situation in terms of making the value go down.

But the continuous writing is to different sectors. The software is
appending to existing files or creating new files. After the loop's
end, maybe the OP is copying out the old files to save as an archive or
leaving them there and the next loop creates new audio files instead of
overwriting the old ones.

The OP said "saves are made on another drive." Does that mean copy of
the existing audio files followed by deleting them? Even if so, how
long before the now unallocated sectors for the deleted old audio files
get reused for new audio files?

I haven't had a drive with pending sector remaps so I can't tell from
experience how long before the count should go down other than to say it
is supposed to go down on the next write (and success of the write) to
the suspect sector (well, cluster that contains that sector). If the
drive incorporated the offline correction to test suspect sectors and
assuming the drive went idle for long enough for the disk to be
considered offline then I would think just 6 suspect sectors would get
tested whenever the disk was considered offline (I doubt it takes long
to test them). If you're just waiting until a free cluster has the
suspect sector get rewritten by something that uses it later, how would
you know when it got tested?

If the pending sector count (of how much to remap) never goes down then
the reserve space has been consumed and those suspect sectors will never
get remapped *if* they later test as bad on write. So unless you know
some process is going to use that particular unallocated sector to write
to it and with your drive never going idle to perform the offline
correction testing, how would you know how long to wait to see if the
pending count stayed the same (meaning no more reserve space to remap if
a write finds them still bad) or if it goes down?

If it keeps incrementing up then the drive is getting worse (well,
probably getting worse until whenever they happen to get retested on a
write). You'd think there would be a utility you could use to force a
retest (write) of whatever sectors that SMART considered suspect in its
pending count but maybe there's no external means of finding out which
sectors are being tracked by the SMART history in the logic on the disk.
On my failing drive, the Pending count has stayed at zero the whole
time, while the reallocations grow on write. Which is a bit strange.
I can read-verify my failing drive, and the Pending will not go up.
But if I do fresh writes to the drive (write 500GB of video), then
I see fresh reallocations the next time I check SMART. And the whole
thing, has made a mockery of my understanding of how automatic
reallocation works. For my drive to work the way it does, implies
the read head follows behind the write head, and can observe what
has just been written (read-after-write). And how likely is that ?

Paul

That's the problem with SMART: it isn't that smart. The drive makers
are allowed far too much leeway in how they interpret the SMART values,
what they will use them for, and how they define the updated values.
Like I mentioned, the temperature value (attribute C2) in the OP's SMART
data is showing over 1 million degrees Celsius. I really doubt that.
SMART has never been completely independent of a particular drive
maker's interpretation and implementation. Some do it this way, some do
it that way, some don't update some values, others put in wrong values.

On old drives, like the OP's, SMART was in its infancy and many values
retrieved from it were unreliable. For old drives, I never rely on the
SMART values. I saw way too many health monitor utilities reading the
SMART data and declaring the drives as bad when there was nothing wrong
with them. I'd probably rely more on SpinRite and HDD<something>
(forget its name) to thoroughly test a drive to determine if a suspect
hard disk was usable or not long before I relied on SMART data.

The point of SMART was to provide predictive failure analysis. It has
failed at that intent, which seems mostly due to the variable (and often
bad) implementation allowed by each disk maker and how much they
implement over time across each maker's varying models. With
SMART, I feel like I'm spinning a Magic 8 ball to predict if the hard
disk is good or bad (have some fun here: http://www.magic8ball.org/).
 
VanguardLH said:
Like I mentioned, the temperature value (attribute C2) in the OP's SMART
data is showing over 1 million degrees Celsius. I really doubt that.

Just for kicks, I figured I'd check.

The "data" value is 1441850 decimal.

Now, convert that to hex, and get 0x0016003A

Ignore the 16 part for a moment, take the 3A part and that's 48+10=58
which gives the 58C on the screen :-)

I don't know the significance of the 0x00160000 offset. It's like
one word too many, was read out (program read 32 bits, disk defined 16 bits?).
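That decoding, spelled out (only the low byte is used, as above; the meaning of the upper word remains unknown):

```python
raw = 1441850                  # SMART attribute C2 "data" value shown by HDTune

print(hex(raw))                # 0x16003a, the hex value worked out above
temp_c = raw & 0xFF            # keep just the low byte
print(temp_c)                  # 58 -> the 58C shown on screen
upper = raw >> 16              # the mystery 0x0016 part
print(upper)                   # 22 -> significance unknown
```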

Paul
 
Paul said:
Just for kicks, I figured I'd check.

The "data" value is 1441850 decimal.

Now, convert that to hex, and get 0x0016003A

Ignore the 16 part for a moment, take the 3A part and that's 48+10=58
which gives the 58C on the screen :-)

I don't know the significance of the 0x00160000 offset. It's like
one word too many, was read out (program read 32 bits, disk defined 16 bits?).

Paul

There are other SMART values that are 48 bits long but the drive maker
only inserts 32 bits. The result is that at some point the SMART value
appears to become negative (since the disk only uses 32 bits, the
leftmost bits get truncated, leaving a leading 1). That's just another
reason why I feel S.M.A.R.T. is just too dumb to rely upon.
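The "goes negative" effect can be sketched like this (hypothetical counter values, just to show the mechanism of a reader sign-extending a field whose high bit has become 1):

```python
# If a tool interprets the low 32 bits of a larger counter as a signed
# 32-bit integer, the displayed value goes negative once the high bit
# is set, even though the drive just kept counting upward.

def as_signed32(value):
    """Interpret the low 32 bits of value as a signed 32-bit integer."""
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value & 0x80000000 else value

print(as_signed32(5))           # 5           -> looks fine
print(as_signed32(0x80000001))  # -2147483647 -> "negative" SMART value
```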
 
On 6/29/2012 5:54 PM, VanguardLH wrote:
From their web site:

Loop Recorder has 64bit support to handle recorded audio data of up to
16 GB.

Loop Recorder Pro can handle even larger amounts of audio data. With a
FAT32-filesystem it can record audio data of up to 256 GB per session
and with NTFS it is unlimited.

So you must be using their Standard version and not their Pro version.
Or have you been indefinitely using their free trial version which does
not include support?

The Pro version has the archive function, not the Standard version (and
very probably not the trial version).

We use the Pro version.
The prog just seems unfinished: as of June 30, 2012, the website is
still marked "better performance under Windows 98SE and ME".
I take it as a very bad joke.

The prog has some features that are indeed totally out of date, and
that's a sign of something strange (for a 200-euro program, it's not fair).

Even when the app is crashing, the spelling of the message is wrong;
they forgot a letter in a word, so you really get the impression it's
not serious, and at the same time you realize your recording is
just LOST.

If the prog needs very special hard-disk monitoring in order not to fail
every time, they should say so in BIG LETTERS, the biggest possible
letters. That warning should even be built into the prog itself.

I cannot believe for one second that NASA could trust such an app ...

Just imagine: I have used it for almost 10 years now, with all
possible care and settings ...

What's the mystery?
 
On 6/29/2012 5:54 PM, VanguardLH wrote:
From their web site:

Loop Recorder has 64bit support to handle recorded audio data of up to
16 GB.

Loop Recorder Pro can handle even larger amounts of audio data. With a
FAT32-filesystem it can record audio data of up to 256 GB per session
and with NTFS it is unlimited.

So you must be using their Standard version and not their Pro version.
Or have you been indefinitely using their free trial version which does
not include support?

The Pro version has the archive function, not the Standard version (and
very probably not the trial version).

Changing cluster size won't affect a defect in the 'Save' algorithm in
the program to close any files on which it has a handle. From
http://www.looprecorder.de/tut_radio.php, it appears you don't just save
the recording but instead you open an editor and from there you do a
save function. http://www.looprecorder.de/tut_editor.php shows the
editor that should open when you click "Edit and Save". Are you doing
any editing before you click the Save button (disk icon in toolbar)?

No, no, I use the simplest saving process, without editing.
It's a simple "Quick Save" button: it just saves the recordings
(by recollecting the tons of small wave files used for that).



Are you clicking on the "Edit and Save" button in the Loop Recording
dialog or are you instead bringing up the editor and saving from there
without first stopping the looped recording? Presumably you must first
stop recording before you start editing and then save.

What "3 errors" are you talking about (in the temp folder with the 4000
files)? What was this "analysis" you mention?


3 of the roughly 4000 files have problems, so the WHOLE save crashes.



Do you really need to generate 4000 files for 15 hours of recording?

It's not me: I am NOT the author of this program.

But I think it's not a bad idea that they process it this way, because
you perhaps get fewer errors across many small files than in one big
wave file. When saving fails, we can keep the temp and try to do
something with it (and that's painful work).



From their product description, it looks like you can create some huge
audio files, the size limit depending on which file system you
use for the drive. http://www.looprecorder.de/tut_diskusage.php shows
file sizes up to about 300GB, which is obviously a lot larger than your
"small files" comment. How big is your loop?

About 2100 MB for 15 hours with 320 kbps MP3 compression, made from the
original temp wav files (the saving process does the conversion).
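For what it's worth, that figure roughly matches the bitrate (a quick decimal-MB estimate):

```python
# Size of a constant-bitrate MP3 recording: bitrate (kbps) -> KB/s -> MB.
bitrate_kbps = 320
hours = 15

size_mb = bitrate_kbps / 8 * hours * 3600 / 1000
print(round(size_mb))  # 2160, close to the ~2100 MB reported
```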



Are you leaving it at the
small default of 10 minutes? Or did you up it to hours?


I use 15-hour loop segments, sometimes 20.



I don't see that the program is restricted from going up to your 15-hour
window, but then I cannot see the parameters you can specify when you
click the Change button for Time Settings in the Loop Recorder dialog
(shown at http://www.looprecorder.de/tut_radio.php). You aren't hitting
a maximum file count per folder in the file system, but you might be
hitting a maximum object count in the program.

Since you are talking about capturing "live" content then you're talking
about people talking on your show (you don't mention what your show airs
so maybe "live" content is local talent coming into your studio to play
their music).

yes, things like that.



If it's just talking you want to capture, you could lower
the bitrate

Oh my god, no, no.
MP3 at 320 kbps is already enough of a compromise !!!



to reduce the size of the audio file(s). Their diskusage
web page shows how much the size of the audio files differs with
different bitrates. You never mentioned how long the actual loop
you are recording is, but having 4000 audio files sure makes it appear
you chose a small loop size. My guess is that each loop generates
an audio file. So make the loops longer. If you think the audio files
are then getting too big, reduce the bitrate for capture.

Well, it's possible you're hitting a max file count per folder, depending
on what file system you are using on the drive, which you never
mentioned. http://en.wikipedia.org/wiki/Fat32 has a table that shows
the maximum number of files per folder. If you're using the old FAT12
file system, the max is 4068 files per folder. You didn't even mention
which operating system you are using. So what OS and what file
system(s) within it are you using?

Are you running an anti-virus program or anything else that interrogates
the files on your computer? If so, have you tried excluding this
program's "temp" folder?

No, no... we're pretty careful about keeping the OS settings perfect,
both hardware and software. No antivirus at all, nothing comes to
disturb the main task. No auto-updates of anything, etc. The whole
context is rock solid.
You tried all their contacts at http://www.looprecorder.de/email.php,
even the postal address (by sending your letter with signature
confirmation to prove they got it), and they didn't reply?

They replied to me today!
I sent them back my .. sadness and misunderstanding.
Wait and see.


Support
should be included in the paid product. You did pay for it, right, and
you're not just indefinitely using their trial product, right?

Erm, let's say we use a "special" trial Pro version.
That's usually what people do to fully test progs and see if they are
worth buying. But we also used the official Trial Pro Version, and it
crashes the same way if you use big time loops (5 hours and more).
Personally I get suspicious of any company proliferating commercialware
that hides behind a private domain registration (their registrar is
listed as the registrant instead of the real registrant)

???


to hide their
identity.

But they give their two names, so ??

looprecorder.de's registration shows their registrar (1and1)
as the registrant. Registrants pay extra to hide.


I don't really understand (because I'm sometimes tired of thinking in English).
 
On 6/29/2012 6:18 PM, Pen wrote:
You really should be running a better checking program than
chkdsk. I would suggest SpinRite. If there are any disk
problems it will find them.
http://www.grc.com/sr/spinrite.htm

Thank you, I will try that too.
Perhaps all the problems come from this side, since the disk is always
busy writing those thousands of little files.

So, if the LOOP is "tiring", "exhausting" the hard drives, they should
SERIOUSLY WARN users before they use it. Instead of just losing the data.
 
On 6/29/2012 7:52 PM, Paul wrote:
Very interesting.

Some observations:

1) Your hard drive is too hot!

58C will ruin the hard drive.

Fit forced air cooling near the drive.

The best way to do this, is mount a fan capable of drawing cool
room air into the computer case, and have it blow directly
over the surface of the hard drive.

High temperature operation like that, will shorten the lifetime
of the hard drive motor. The lubricant inside the sealed motor,
will be forced out of the motor.

2) Reallocated sectors = 1, is not a bad number.

3) Current Pending Sectors count = 6 is more interesting.
It seems, for whatever reason, your drive may be having more problems
reading the data back later. My failing drive doesn't do that.
The thing is, since you're "loop recording", there should be many
opportunities to reduce the Current Pending Sector count, and
increase the Reallocated Sectors. But your Reallocated Sectors
is not grown significantly.

I do not know how to interpret this information, except to suggest
the high operating temperature is partially responsible. Cool off the
drive.

Ohh, of course. It's summer now, and on top of that, this computer is
near the ceiling !!! Fu ... !




4) Power on Hours is 14819. That's roughly 617 days of continuous
24 hour operation. Most of my drives, don't last that long here.

The customer reviews (Feedback button) for that drive, are excellent.
When it comes time to replace that hard drive with another, it will
not last nearly as long in your application.

HGST HDS722516VLAT80 Deskstar 7K250 IDE Ultra ATA100 160GB 7200 RPM

http://www.newegg.com/Product/Product.aspx?Item=N82E16822145061

Your drive was designed some time after the IBM fiasco. IBM sold
their consumer disk division to Hitachi, after the incident described
here, and then they became HGST. And presumably, they learned from
their mistakes. I would say your 7K250 has held up well.

http://en.wikipedia.org/wiki/Deskstar

*******

Some computer cases, have an air intake fan, near the drive
bay area. This fan draws cool air from the room, and blows it
over the drive bay.

I know how to do that, of course. And I will, very quickly.
Just don't forget I have had problems with the loop rec. for years, with
different drives, and not in the heart of the heat !

http://media.bestofmicro.com/H/I/253350/original/thermaltake_m9_intake.jpg

Mechanically, it's better if the fan is attached to the computer
case, rather than attached to the bay. Attached to the bay,
vibrations from the fan can couple into the disk drive.

Yes, of course. Our next drive will be isolated from all vibration.

http://media.bestofmicro.com/H/D/253345/original/thermaltake_m9_cage.jpg

I use an externally mounted, 120mm fan, to cool my drives. The
fan is bolted to the front of the computer, using a home-made
aluminum mounting frame. Current operating temperature of my drives
(as reported by HDTune) is 29C and 30C, and that's because it is a
summer's day. I am nowhere near 58C.

HTH,
Paul

Many thanks, Mister Paul.
It's good to learn all these crucial "details" that are not "details".
 
On 6/30/2012 2:43 AM, VanguardLH wrote:
However, I typically do not pile hard disks right next to each other in
the drive cage. I like to leave space between them. Even if the case
fan's speed goes up, there's not enough space between adjacent drives to
move cooling air between them. And making sure any fat/wide cables
don't block airflow is important, too. If you can't move them out of
the way, twist them so they are inline with the airflow, not against it.


Of course. Now I use an externally cooled SATA drive enclosure (no RAID)
over eSATA, with 4 x 2 TB HDDs. (Not on the computer running the loop
recorder, which is another machine.)
 
Setup.exe said:
On 6/29/2012 6:18 PM, Pen wrote:

Thank you, I will try that too.
Perhaps all the problems come from this side, since the disk is always
busy writing those thousands of little files.

So, if the LOOP is "tiring", "exhausting" the hard drives, they should
SERIOUSLY WARN users before they use it. Instead of just losing the data.

In terms of storage technology, there are devices like this.

http://www.acard.com.tw/english/fb01-product.jsp?idno_no=382&prod_no=ANS-9010BA&type1_idno=5&ino=28

Price is listed as USD 339.

It would also need eight sticks of RAM. This would add greatly to the expense.
Only DDR3 RAM now, is cheap. DDR2 RAM is more expensive, and that is the type
used by this box.

http://dl.acard.com/download/compitibility list/ANS-9010_9010B compatible list.pdf

Such a device, you could write it as much as you want, and it would
not wear out. If such a device still has problems, then it must
be a software problem.

*******

In a hard drive, you could replace your current hard drive, with an RE4.
For example, if the computer has two disk ports, you could run two
of these in RAID 1 mirror mode. One advantage of RE4, is TLER.

http://www.newegg.com/Product/Product.aspx?Item=N82E16822136697

http://en.wikipedia.org/wiki/TLER

"Desktop Computers and TLER Effect

Effectively, TLER and similar features limit the performance of on-drive
error handling, to allow RAID controllers to handle the error if problematic.
In a non-RAID environment, such features are unhelpful, and manufacturers
do not recommend their use.

It is best for TLER to be "Enabled" when in a RAID array to prevent the
recovery time from a disk read or write error from exceeding the RAID
controller's timeout threshold. If a drive times out, the hard disk will
need to be manually re-added to the array, requiring a re-build and
re-synchronization of the hard disk. Enabling TLER seeks to prevent this
by interrupting error correction before timeout, to report failures only
for data segments. The result is increased reliability in a RAID array.

In a stand-alone configuration TLER should be disabled. As the drive is
not redundant, reporting segments as failed will only increase manual
intervention. Without a RAID controller to drop the disk, normal (no TLER)
recovery ability is most stable.

The WDTLER utility allows for the enabling or disabling of the TLER parameter
in the hard disk's firmware settings allowing the user to determine the best
setting for his particular usage as either a stand-alone or RAID drive.
This utility is written for DOS and you will require a DOS bootable disk
with this utility on it to use it."

So that would be a suggestion for a drive to use, either a pair of drives
in RAID 1 mirror, or a single drive (with TLER adjusted as appropriate).

*******

An SSD would be a good solution, especially as the write rate of your loop
program is probably not that high. The lifetime of an SSD, is a function
of the maximum write cycles (3000) times the capacity (40GB). Say, for the
sake of argument, a recording program consumes 2GB per hour of space. Then
the SSD would last 3000*40/2 = 60000 hours or about 6.8 years. Using internal
wear leveling, the wearing of the flash is spread evenly over the address
space of the flash (so you don't "burn a hole in it").
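Spelling out that estimate (the 3000 write cycles, 40GB capacity, and 2GB/hour write rate are the assumptions stated above):

```python
# SSD lifetime estimate under even wear leveling.
write_cycles = 3000      # assumed P/E cycles per flash cell
capacity_gb  = 40        # drive capacity
gb_per_hour  = 2         # assumed recording write rate

total_writable_gb = write_cycles * capacity_gb      # 120000 GB in total
lifetime_hours    = total_writable_gb / gb_per_hour
lifetime_years    = lifetime_hours / (24 * 365)

print(lifetime_hours)            # 60000.0 hours
print(round(lifetime_years, 1))  # 6.8 years
```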

That makes an SSD a viable solution as well.

No matter what solution you choose, keep the temperature down.

*******

Are there other ways to do it ? Yes.

This motherboard is 12" x 8" and has a G34 socket. Motherboard
costs $254 USD. The motherboard was specifically selected, to
fit in an ordinary computer case.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182230

It has room for eight sticks of RAM. Purchase four of these kits
at $60 each, to fill the DIMM slots. This gives 32GB of reliable
storage. (Work out whether this is sufficient for a full day of
recording !) This is unbuffered ECC memory, meaning even if the
memory makes a trivial error, the error can be corrected automatically
at the hardware level.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262

Next, you need a G34 socket processor. This processor costs $270.
The heatsink fan and cooler are extra. I selected this as the
cheapest processor to run the recording machine.

http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266

For a total system cost of $254 + (4 * $60) + $270, you have
32GB of reliable storage. Total cost so far $764 USD.

Add to it, this Windows program for $18.99 USD. Now our system
cost is 764+19 = $783 USD. This converts the 32GB of memory
on the motherboard, into a 31.5GB hard drive (need to leave a little
RAM for the OS). The loop files would be safe in here, as long as
the computer power remains running. The files can be transferred
out of the RAMDisk at your convenience (at the end of the day).

http://memory.dataram.com/products-and-services/software/ramdisk
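The cost tally, as a quick check (prices as quoted above):

```python
motherboard = 254        # G34 motherboard
ram_kits    = 4 * 60     # four ECC memory kits at $60 each
cpu         = 270        # G34 socket processor
ramdisk_sw  = 19         # RAMDisk software, rounded from $18.99

hardware_total = motherboard + ram_kits + cpu
print(hardware_total)                # 764
print(hardware_total + ramdisk_sw)   # 783
```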

There are many details to be worked out in such an approach, but
that was intended to give you an alternative solution - a solid
state storage which might be a bit cheaper than the Acard box.

*******

So those are some alternate ways of building storage systems.

My suspicion would be, the handling of I/O by the looprecorder
program, needs work... And even with the *best* hardware solution,
the program may cause problems.

Paul
 
Paul said:
In terms of storage technology, there are devices like this.

http://www.acard.com.tw/english/fb01-product.jsp?idno_no=382&prod_no=ANS-9010BA&type1_idno=5&ino=28

Price is listed as USD 339.

It would also need eight sticks of RAM. This would add greatly to the expense.
Only DDR3 RAM now, is cheap. DDR2 RAM is more expensive, and that is the type
used by this box.

Just be sure to replace the lithium battery at 3-year intervals, or
shorter, to ensure the RAM drive doesn't lose its contents (before the
OP gets around to saving the audio files to somewhere else).
 