Help setting up a Kick Ass Machine with 1 Terabyte of Storage: P4 or G5?

  • Thread starter: L.S

L.S

Hi, I was wondering if anyone could give me some advice on putting
together a machine that has 1 terabyte of storage inside the case. I
don't want to make it a server or RAID it up, just in case a drive
goes down. My budget is about £1500-£2000 for an Intel machine, or up
to £3000 for a G5, which is my other option; I'm currently using G4s
for shoots and production.

What I do: I work for a digital studio, where we use Phase One H25
digital backs and shoot 20-100 gigabytes, so I need a lot of storage
space. At the moment we have ten LaCie and EZQuest drives of 100 to
250 GB, which I use as the first copy for jobs shot on location; I
then transfer the files to a main computer for the second backup and
for editing.

What I'm looking for: the processing power to convert 30 MB RAW files
into 65 MB files using Capture One 3.1 and Photoshop.

I have been told that more RAM helps, so about 1 to 2 GB of DDR RAM
would be good, and that dual processors help as well. I don't know
whether you can put two Pentium 4s together on one motherboard or
whether I would have to use Xeon or AMD CPUs. Does anyone know if the
AMD 64 processors are worth looking into?

What motherboard and chipset would people recommend?

Is it possible to put 4 or 6 Serial ATA hard drives inside a PC tower
without heat problems, and would a 400W power supply be enough?

My other option is to pay a premium for a 1.8 GHz G5 and somehow fit
4 internal hard drives, which I have been told can be done with a kit
called a G5 Jam that adds 2 extra drives to the case. That worries me
about heat, though.

Any advice?

(e-mail address removed)
 
I have been told that more RAM helps, so about 1 to 2 GB of DDR RAM
would be good, and that dual processors help as well. I don't know
whether you can put two Pentium 4s together on one motherboard or
whether I would have to use Xeon or AMD CPUs. Does anyone know if the
AMD 64 processors are worth looking into?



Your best bet is to use a dual Xeon solution in this instance. The AMD
Opterons have been a bit slow and problematic while showing reliability
issues as of late.

What motherboard and chipset would people recommend?



I would highly recommend the Supermicro X5DA8 with onboard U320 SCSI. If
you insist on not using SCSI you can use the X5DAE, which is the same board
without SCSI.

http://www.supermicro.com/PRODUCT/MotherBoards/E7505/X5DA8.htm

Is it possible to put 4 or 6 Serial ATA hard drives inside a PC tower
without heat problems, and would a 400W power supply be enough?

For what you are doing, I would have to say that SCSI is your only solution.
You can play with the ATA drives and waste too much time tweaking just to
get them to function in the mode you desire.



As for heat, this isn't a problem if you select a well-designed case.

My other option is to pay a premium for a 1.8 GHz G5 and somehow fit
4 internal hard drives, which I have been told can be done with a kit
called a G5 Jam that adds 2 extra drives to the case. That worries me
about heat, though.

Whether you go with a G5 or a dual Xeon, you are going to pay a premium.
The best advice is to spend the money the first time around; you will save
in the long run, since you won't have to upgrade when you find out that
going cheap didn't buy you the tools you need. Good luck.



Rita
 
L.S said:
Hi, I was wondering if anyone could give me some advice on putting
together a machine that has 1 terabyte of storage inside the case. I
don't want to make it a server or RAID it up, just in case a drive
goes down. My budget is about £1500-£2000 for an Intel machine, or up
to £3000 for a G5, which is my other option; I'm currently using G4s
for shoots and production.

What I do: I work for a digital studio, where we use Phase One H25
digital backs and shoot 20-100 gigabytes, so I need a lot of storage
space. At the moment we have ten LaCie and EZQuest drives of 100 to
250 GB, which I use as the first copy for jobs shot on location; I
then transfer the files to a main computer for the second backup and
for editing.

What I'm looking for: the processing power to convert 30 MB RAW files
into 65 MB files using Capture One 3.1 and Photoshop.

I have been told that more RAM helps, so about 1 to 2 GB of DDR RAM
would be good, and that dual processors help as well. I don't know
whether you can put two Pentium 4s together on one motherboard or
whether I would have to use Xeon or AMD CPUs. Does anyone know if the
AMD 64 processors are worth looking into?

What motherboard and chipset would people recommend?

Is it possible to put 4 or 6 Serial ATA hard drives inside a PC tower
without heat problems, and would a 400W power supply be enough?

My other option is to pay a premium for a 1.8 GHz G5 and somehow fit
4 internal hard drives, which I have been told can be done with a kit
called a G5 Jam that adds 2 extra drives to the case. That worries me
about heat, though.

Ignore Rita's comments about how SCSI is the only solution. You know the
old saying "if all you have is a hammer everything looks like a nail"?
Well, she's like that with SCSI--there appears to be no conceivable
circumstance, including being held at gunpoint, under which she would
recommend anything but SCSI.

You're more likely to get a useful answer to your question about CPU and
memory for Photoshop from the photography-related
groups--comp.sys.ibm.pc.hardware.video is targeted more at display
adapters and monitors than at video production or photography.

As for drives, four 250-300 GB SATAs should be fine for what you're
describing. There are numerous cases that will hold four or more drives
and provide adequate cooling--many of the Lian-Li and GMC models have
provision for dual fans blowing on the drive bays, and in one of my
machines, with four drives in such a GMC case, I'm seeing drive
temperatures around 30C with a couple of low-speed Panaflo fans. Basically
a non-problem.
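The capacity arithmetic is worth spelling out, since drive makers count a
gigabyte as 10^9 bytes while the OS reports binary units. A quick sketch in
Python (the drive size is just the advertised figure):

# Rough capacity check: do four 250 GB SATA drives add up to "1 TB"?
# Vendors use decimal units (1 GB = 10**9 bytes); operating systems
# usually report binary units (1 GiB = 2**30 bytes).

drives = 4
size_gb_decimal = 250                      # per-drive size as advertised

total_bytes = drives * size_gb_decimal * 10**9
print(f"Total: {total_bytes:,} bytes")
print(f"  = {total_bytes / 10**12:.2f} TB (decimal, vendor style)")
print(f"  = {total_bytes / 2**40:.2f} TiB (binary, as the OS reports it)")
# Four 250 GB drives: exactly 1.00 TB decimal, about 0.91 TiB binary.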

Make sure you've got good power--an oversized power supply does no harm to
anything but the electric bill (note to lurkers--the reason is that power
supplies generally give their highest conversion efficiency at or near
their design load), but an undersized power supply causes a multiplicity
of problems. For a production machine I'd also check the voltages under
load with a good meter, not just the motherboard sensors, which are not
all that accurate.
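If you want a rough number before buying the supply, a back-of-the-envelope
like the one below will do. The per-component wattages are only assumed
ballpark figures for a dual-CPU, four-drive box, not measurements, so
substitute the real numbers from the datasheets.

# Rough PSU sizing sketch; every wattage here is an assumption.
components = {
    "CPUs (2x)":     2 * 75,   # two workstation CPUs at ~75 W each
    "Motherboard":   40,
    "RAM (2 GB)":    20,
    "Hard disks":    4 * 15,   # ~15 W each under load
    "Optical drive": 20,
    "Graphics card": 40,
    "Fans, misc":    20,
}

load_w = sum(components.values())
headroom = 1.4                 # keep the supply well below its rated output
for name, watts in components.items():
    print(f"{name:14s} {watts:4d} W")
print(f"{'Estimated load':14s} {load_w:4d} W")
print(f"Suggested PSU rating: ~{load_w * headroom:.0f} W")

With those guesses the load comes out around 350 W, which is why a plain
400 W supply starts to look marginal for this sort of box.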

Remember that drives _do_ fail and make appropriate provisions.
 
J. Clarke said:
Ignore Rita's comments about how SCSI is the only solution. You know the
old saying "if all you have is a hammer everything looks like a nail"?
Well, she's like that with SCSI--there appears to be no conceivable
circumstance, including being held at gunpoint, under which she would
recommend anything but SCSI.

Of course he should ignore my comments about SCSI if his goal is to build a
novelty machine for puttering around the house. After reading his
requirements, I know he would be very disappointed with a toy when he needs
a tool to do professional video rendering. Unfortunately, Mr. Clarke's
knowledge is limited to current trends such as overclocking and other
passing novelties. This is why he is largely ignored, or just taken at
less than face value.
You're more likely to get a useful answer to your question about CPU and
memory for Photoshop from the photography-related
groups--comp.sys.ibm.pc.hardware.video is targeted more at display
adapters and monitors than at video production or photography.

As for drives, four 250-300 GB SATAs should be fine for what you're
describing. There are numerous cases that will hold four or more drives
and provide adequate cooling--many of the Lian-Li and GMC models have
provision for dual fans blowing on the drive bays, and in one of my
machines, with four drives in such a GMC case, I'm seeing drive
temperatures around 30C with a couple of low-speed Panaflo fans. Basically
a non-problem.

As for the SATA drives, you would be hard-pressed to find any of them in
systems that video professionals use. The cases are a good
recommendation, though.
Make sure you've got good power--an oversized power supply does no harm to
anything but the electric bill (note to lurkers--the reason is that power
supplies generally give their highest conversion efficiency at or near
their design load), but an undersized power supply causes a multiplicity
of problems. For a production machine I'd also check the voltages under
load with a good meter, not just the motherboard sensors, which are not
all that accurate.

Yes, a quality power supply is always needed.
Remember that drives _do_ fail and make appropriate provisions.

Yes they do, and unfortunately you are recommending something that will cost
him more in time and lost revenue, when he could have spent a few extra
dollars upfront and done it right the first time.
 
L.S said:
Hi, I was wondering if anyone could give me some advice on putting
together a machine that has 1 terabyte of storage inside the case. I
don't want to make it a server or RAID it up, just in case a drive
goes down. My budget is about £1500-£2000 for an Intel machine, or up
to £3000 for a G5, which is my other option; I'm currently using G4s
for shoots and production.

What I do: I work for a digital studio, where we use Phase One H25
digital backs and shoot 20-100 gigabytes, so I need a lot of storage
space. At the moment we have ten LaCie and EZQuest drives of 100 to
250 GB, which I use as the first copy for jobs shot on location; I
then transfer the files to a main computer for the second backup and
for editing.

What I'm looking for: the processing power to convert 30 MB RAW files
into 65 MB files using Capture One 3.1 and Photoshop.

I have been told that more RAM helps, so about 1 to 2 GB of DDR RAM
would be good, and that dual processors help as well. I don't know
whether you can put two Pentium 4s together on one motherboard or
whether I would have to use Xeon or AMD CPUs. Does anyone know if the
AMD 64 processors are worth looking into?

What motherboard and chipset would people recommend?

Is it possible to put 4 or 6 Serial ATA hard drives inside a PC tower
without heat problems, and would a 400W power supply be enough?

My other option is to pay a premium for a 1.8 GHz G5 and somehow fit
4 internal hard drives, which I have been told can be done with a kit
called a G5 Jam that adds 2 extra drives to the case. That worries me
about heat, though.

Any advice?

(e-mail address removed)
Check out The Screensavers show on techtv.com. They did this about six
months ago, and they usually have a good archive of what they built and how.
 
Could be useful; it could also be a waste of money.

How big are the files you are editing, and how much memory do the
applications use? If the files are 200 MB and the program uses 800 MB at
maximum, then buying 2 GB instead of 1 GB doesn't help you at all.
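A rough sketch of that check, in Python; every figure below is an
assumption, so measure the real peak memory use of Capture One and
Photoshop on an actual job before trusting it:

# Does the working set fit in RAM? All numbers are placeholder assumptions.
file_size_mb = 60            # one converted file
app_overhead_mb = 300        # assumed baseline footprint of the application
copies_factor = 5            # assumed multiplier for undo, layers, scratch copies
os_and_other_mb = 400        # assumed OS plus background applications

needed_mb = app_overhead_mb + file_size_mb * copies_factor + os_and_other_mb

for installed_mb in (512, 1024, 2048):
    verdict = "fits in RAM" if needed_mb <= installed_mb else "will page to disk"
    print(f"{installed_mb:5d} MB installed, ~{needed_mb} MB needed: {verdict}")

Under those assumptions 1 GB is already enough and the second gigabyte buys
nothing, which is exactly the point: measure first.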

You already work with the software, so you should find that out first.

A second CPU CAN help, but only if the CPU is the bottleneck. If the first
CPU is not already at 100% usage for large amounts of time, the second CPU
will just sit idle.
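You can check that on the current machines without buying anything. A small
sketch using the third-party psutil package (Task Manager, top or the Mac's
CPU monitor shows the same thing without any code):

# Watch per-CPU load while a typical conversion runs (pip install psutil).
import psutil

print("Start the Capture One / Photoshop batch now; sampling for 60 s...")
for _ in range(12):
    per_cpu = psutil.cpu_percent(interval=5, percpu=True)
    print("  ".join(f"cpu{i}: {p:5.1f}%" for i, p in enumerate(per_cpu)))

# If one CPU sits near 100% while the work runs, a second CPU (and a
# multithreaded application) can help; if nothing is saturated, the
# bottleneck is memory or disk instead.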

Try to find a review that does exactly what you want to do.

Unfortunately, Photoshop can be used to prove anything you want. You can
show that a 1 GHz PIII beats a 2 GHz P4. You can use it to show that a P4
beats an Athlon, and that the same Athlon beats that same P4. Similarly
for showing that Apple beats PC, and vice versa.

It all depends on what part of Photoshop you are using.
Both Intel and AMD have strong processors that you will be happy with.
The difference is probably only measurable in benchmarks anyway.
Your best bet is to use a dual Xeon solution in this instance. The AMD
Opterons have been a bit slow and problematic while showing reliability
issues as of late.

Care to explain that? I haven't seen reliability or performance
issues concerning the Opterons.
I would highly recommend the Supermicro X5DA8 with onboard U320 SCSI. If
you insist on not using SCSI you can use the X5DAE, which is the same board
without SCSI.

http://www.supermicro.com/PRODUCT/MotherBoards/E7505/X5DA8.htm



For what you are doing, I would have to say that SCSI is your only solution.

Why?

It seems he mainly needs LOTS and lots of storage. But the highest
performance doesn't seem necessary because the bottleneck is most
likely the CPU and Memory.

If that is the case, then SATA is a far better choice.

If the hard disks are the bottleneck after all (hardly likely with
Photoshop, but it could be with that other program, which I don't have
experience with), then he could look into SATA/PATA with RAID, or SCSI.
Or even SCSI with RAID if he needs still more disk performance.

But as I said, I don't think disk performance is a real issue, so SCSI
is not his only solution IMO
You can play with the ATA drives and waste too much time tweaking just to
get them to function in the mode you desire.

As for heat, this isn't a problem if you select a well-designed case.

I have 5 hard disks in a midi-tower (FK320) and that isn't any problem.

There are lots of cases nowadays that you can use.

One thing that is important is a good PSU.
I would recommend Antec. And they also have good cases which aren't
too expensive.
For instance take a look at:
http://www.antec-inc.com/us/pro_details_enclosure.php?ProdID=93700#
With two 12cm fans you can be sure that heat is not an issue.
Whether you go with a G5 or a dual Xeon, you are going to pay a premium.
The best advice is to spend the money the first time around; you will save
in the long run, since you won't have to upgrade when you find out that
going cheap didn't buy you the tools you need. Good luck.

True.
But even better is determining what you should spend the money on.
First determine the bottlenecks of the application, and then decide
what to buy.

BTW, with such an expensive system and so much storage I don't
understand why he doesn't want RAID. After all, RAID cards aren't all that
expensive anymore, certainly not when you compare the price to the rest of
the system.

Marc
 
Your best bet is to use a dual Xeon solution in this instance. The AMD
Care to explain that? I haven't seen reliability or performance
issues concerning the Opterons.



Google is your friend, use it.

solution.

Why?

It seems he mainly needs LOTS and lots of storage. But the highest
performance doesn't seem necessary because the bottleneck is most
likely the CPU and Memory.



This is where I disagree with you. I have done tests using dual and quad
Xeon systems with a RAID array vs. single drive (JBOD) and I found that in
99% of cases the I/O is the bottleneck and not the CPU/memory.
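Anyone who wants a number of their own can time a sequential read. The
sketch below uses only the Python standard library; the path is a
placeholder, and the OS file cache will inflate a repeated run, so use a
large, freshly written file.

# Crude sequential-read timing on a single drive.
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else "big_test_file.bin"  # placeholder
chunk = 8 * 1024 * 1024   # 8 MB reads

start = time.time()
total = 0
with open(path, "rb") as f:
    while True:
        data = f.read(chunk)
        if not data:
            break
        total += len(data)
elapsed = max(time.time() - start, 1e-9)

print(f"Read {total / 1e6:.0f} MB in {elapsed:.1f} s "
      f"= {total / 1e6 / elapsed:.1f} MB/s")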

If that is the case, then SATA is a far better choice.

If all he wanted to do was store files for later retrieval, he could do the
same with an old, outdated DDS3 tape drive.

If the hard disks are the bottleneck after all (hardly likely with
Photoshop, but it could be with that other program, which I don't have
experience with), then he could look into SATA/PATA with RAID, or SCSI.
Or even SCSI with RAID if he needs still more disk performance.



Yes, I agree that he would do really well with a SCSI RAID solution. That
is why I recommended the Supermicro X5DA8 MB with its expansion slot to
cheaply upgrade it to a RAID solution.


But as I said, I don't think disk performance is a real issue, so SCSI
is not his only solution IMO



Unfortunately, you may want to believe that you can solve all problems by
throwing more CPU/memory on the fire, but I have found the biggest
bottleneck in most, if not all, systems is the I/O.

True.
But even better is determining what you should spend the money on.
First determine the bottlenecks of the application, and then decide
what to buy.



I can agree with this. This is why some of the more prominent video
editing companies such as AVID use SCSI RAID solutions in their equipment.
Hell, you can still get top dollar for old AVID equipment.

BTW, with such an expensive system and so much storage I don't
understand why he doesn't want RAID. After all, RAID cards aren't all that
expensive anymore, certainly not when you compare the price to the rest of
the system.

Agreed, this is where he needs to go. He needs a good-quality SCSI array
that will give him the performance and reliability he needs for this type of
task. Granted, he might be able to squeak by with a SATA RAID solution, but
at what cost, for a novelty item in a professional environment?





Rita
 
Google is your friend, use it.

Google didn't show any reliability or performance issues compared to
Xeon. Or maybe you expect me to read every one of the 8000 hits on "Xeon
reliability performance issues" and the 2000 hits on "Opteron
reliability performance issues" to find out what issues you might be
talking about?

The numerous hardware sites and newsgroups I frequently visit haven't
given any indication of that either.

If there really is an issue then IMO it is quite unlikely that I
missed it. And even more unlikely that IBM and SUN both missed it too.

If you really feel there is an issue with the Opteron, then it isn't too
much to ask for a URL so that I can read about it, is it?
This is where I disagree with you. I have done tests using dual and quad
Xeon systems with a RAID array vs. single drive (JBOD) and I found that in
99% of cases the I/O is the bottleneck and not the CPU/memory.

And those systems were used for Photoshop?

Quad Xeon systems sound like server usage rather than Photoshop and
such. Those systems have different demands, and there I/O is indeed
often the bottleneck.

But in my experience with Photoshop and video editing programs, disk
I/O has never been the bottleneck.

(When you are working in Photoshop you are just using memory and CPU.
You only use the hard disk to save the end result. Hardly something for
which you need a super-fast and super-expensive solution.)

So unless we first establish whether I/O is really a bottleneck for his
applications, we cannot decide whether he should choose SCSI over SATA or
RAID over single disks.

If it turns out that disk I/O is the bottleneck for his applications,
then I'll be the first to agree that he should use SCSI, or SCSI RAID.

Marc
 
If it turns out that disk I/O is the bottleneck for his applications,
then I'll be the first to agree that he should use SCSI, or SCSI RAID.

Yes, good advice. I think that he will just have to determine where the
bottleneck is in the first place. I just hope he doesn't have to buy the
same tools three times to find out that the I/O was the culprit all the
time. Oh well, his money.

Rita
 
Yes, good advice. I think that he will just have to determine where the
bottleneck is in the first place. I just hope he doesn't have to buy the
same tools three times to find out that the I/O was the culprit all the
time. Oh well, his money.

The OP was already working with the software on G4 machines. So he
should at least be able to determine the bottleneck on those machines.

With a bit of testing on those machines the risk of buying the wrong
equipment should be quite small.

If he has a memory bottleneck now, he should of course buy more
memory. That can mean that the new bottleneck in the new machine
(with that extra memory) becomes disk I/O. The trick is finding out if
that is the case.
If he tests the current systems with smaller files, so that he doesn't
run into the low memory limit of his current system, he should be able
to determine what the new bottleneck of his system will be when he has
enough memory.

If he already has a disk I/O bottleneck, the situation becomes really
simple: buy the fastest disks and don't worry about testing what will
become the new bottleneck, because it will most likely still be the
disk I/O, even with SCSI RAID :-)

Marc
 
The OP was already working with the software on G4 machines. So he
should at least be able to determine the bottleneck on those machines.



Agreed. He should be able to monitor and determine what resources are being
maxed out without adding more hardware. Then again, he may not get an
accurate indication if his application is paging back to the hard drive and
eating resources. Catch-22.

With a bit of testing on those machines the risk of buying the wrong
equipment should be quite small.



Depending on his situation, the time and money can add up very quickly to
the point that buying an expensive solution outright in the first place
would have saved him considerable money by preventing lost revenue.

If he has a memory bottleneck now, he should of course buy more
memory. That can mean that the new bottleneck in the new machine
(with that extra memory) becomes disk I/O. The trick is finding out if
that is the case.



I don't recall: how much memory does he presently have?

If he tests the current systems with smaller files, so that he doesn't
run into the low memory limit of his current system, he should be able
to determine what the new bottleneck of his system will be when he has
enough memory.



Depending on how much the file size is decreased he may not be able to make
an accurate determination. It's best to use the file size that he normally
uses for his everyday applications. This is why most benchmark utilities
lack accuracy and give a false sense of performance.

If he already has a disk I/O bottleneck, the situation becomes really
simple: buy the fastest disks and don't worry about testing what will
become the new bottleneck, because it will most likely still be the
disk I/O, even with SCSI RAID :-)

Even the fastest drive in a single-drive configuration is going to be
considerably slower than a RAID configuration. He'd be very hard pressed
to saturate the PCI bus in any modern system with a single drive. The
performance gains with a RAID configuration will be considerable enough to
offset any intermittent bus saturation. And if he reaches the saturation
point of a PCI-X slot for his controller, provided his future system has
one, he is far ahead of the game.
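The bus arithmetic behind that is easy to sketch. The PCI figures are the
theoretical peaks; the per-drive rate is only an assumed ballpark for a
7200 rpm drive of this generation, so substitute a measured number.

# How many drives does it take to saturate the bus? Assumed figures noted.
pci_peak_mb_s = 133        # 32-bit x 33 MHz plain PCI, theoretical peak
pci_x_peak_mb_s = 1064     # 64-bit x 133 MHz PCI-X, theoretical peak
drive_mb_s = 50            # assumed sustained sequential rate per drive

for drives in range(1, 7):
    demand = drives * drive_mb_s
    plain = "saturated" if demand > pci_peak_mb_s else "fine"
    pcix = "saturated" if demand > pci_x_peak_mb_s else "fine"
    print(f"{drives} drive(s): ~{demand:3d} MB/s aggregate; "
          f"plain PCI {plain}, PCI-X {pcix}")

One drive comes nowhere near filling plain PCI; an array of three or more
can, which is where a PCI-X slot for the controller starts to matter.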



Again, disk I/O plays a major part in overall performance even in
CPU/memory-intensive applications, since you still have to get the data to
the CPU and memory in the first place. And when he gets to the point where
he has optimized both CPU/memory and disk I/O on his present machine, he
will have reached the physical limitations of his present platform and may
decide he needs to upgrade to the latest and greatest and start over :)
The old carpenter's adage of "measure twice and cut once" applies to this
situation.



Rita
 
Agreed. He should be able to monitor and determine what resources are being
maxed out without adding more hardware. Then again, he may not get an
accurate indication if his application is paging back to the hard drive and
eating resources. Catch-22.

If his application is paging, then it's clear that he doesn't have
enough memory.

The Catch-22 then comes when you try to determine what the next
bottleneck will be once you fix the memory bottleneck.
That can indeed be difficult.
Depending on his situation, the time and money can add up very quickly to
the point that buying an expensive solution outright in the first place
would have saved him considerable money by preventing lost revenue.

Well, that's exactly what I'm trying to say.

With a bit of testing on his *current* machines, he can buy the best
new solution in the first place.
I don't recall: how much memory does he presently have?

He didn't say.
Depending on how much the file size is decreased he may not be able to make
an accurate determination. It's best to use the file size that he normally
uses for his everyday applications. This is why most benchmark utilities
lack accuracy and give a false sense of performance.

He'll have to decrease the file size just enough that performance
monitoring no longer shows memory as the bottleneck.

The next bottleneck is not guaranteed to be the same bottleneck as he
would see with the extra memory, but there is a very good chance that it
will be. It's certainly better than not testing at all.
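A synthetic way to see that cliff, as a rough sketch: sweep a buffer that
stands in for the working set and watch the time per pass jump once the
size approaches installed RAM. The sizes are placeholders; scale them to
the machine being tested.

# Time an in-memory sweep at growing working-set sizes.
import time

def sweep(buf, passes=3):
    # Touch every 4 KB page of the buffer a few times, like an in-memory edit.
    start = time.time()
    for _ in range(passes):
        for i in range(0, len(buf), 4096):
            buf[i] = (buf[i] + 1) & 0xFF
    return time.time() - start

for size_mb in (64, 128, 256, 512):
    buf = bytearray(size_mb * 1024 * 1024)
    elapsed = sweep(buf)
    print(f"{size_mb:4d} MB working set: {elapsed:.2f} s per run")

As long as the time grows roughly linearly with the size, memory is not the
limit; a sudden jump means the machine has started paging.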
Even the fastest drive in a single-drive configuration is going to be
considerably slower than a RAID configuration. He'd be very hard pressed
to saturate the PCI bus in any modern system with a single drive. The
performance gains with a RAID configuration will be considerable enough to
offset any intermittent bus saturation. And if he reaches the saturation
point of a PCI-X slot for his controller, provided his future system has
one, he is far ahead of the game.



Again, disk I/O plays a major part in overall performance even in
CPU/memory-intensive applications, since you still have to get the data to
the CPU and memory in the first place.

Reading a 60 MB file from a single hard disk only takes about a second.

The editing takes far longer, from several minutes to several hours. If
you have a CPU/memory bottleneck you lose MUCH more than 1 second in
the editing process.

If you then save the result, you need another second.

By upgrading to SCSI, or SCSI with RAID, he can decrease that 1 second
to 1/4 second. That saves him half a second on every file he edits.
Not really spectacular, unless you edit thousands of those files every
hour. Then consider that he probably edits a few files an hour, and the
CPU/memory bottleneck probably means he loses minutes on every file.
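Spelling that arithmetic out (the read and write times are the ones above;
the five-minute edit time is just an assumed example):

# How much of the per-file time does a faster disk actually recover?
read_s_single  = 1.0       # read a ~60 MB file from one ATA/SATA drive
write_s_single = 1.0       # write the result back
read_s_raid    = 0.25      # same transfers on a fast SCSI RAID (assumed)
write_s_raid   = 0.25
edit_s         = 5 * 60    # five minutes of CPU/memory-bound editing (assumed)

total_single = read_s_single + edit_s + write_s_single
total_raid   = read_s_raid + edit_s + write_s_raid
saved = total_single - total_raid

print(f"Per file: {total_single:.1f} s on a single drive, "
      f"{total_raid:.1f} s with the RAID, {saved:.1f} s saved "
      f"({saved / total_single:.1%} of the total)")

That works out to about 1.5 seconds saved out of five minutes, roughly half
a percent of the total.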
And when he gets to the point where he has optimized both CPU/memory and
disk I/O on his present machine, he will have reached the physical
limitations of his present platform and may decide he needs to upgrade to
the latest and greatest and start over :) The old carpenter's adage of
"measure twice and cut once" applies to this situation.

In his situation it's probably easy to fix the memory bottleneck, but
most likely the CPU will always remain the bottleneck.

Marc
 
Reading a 60 MB file from a single hard disk only takes about a second.
The editing takes far longer, from several minutes to several hours. If
you have a CPU/memory bottleneck you lose MUCH more than 1 second in
the editing process.

If you then save the result, you need another second.

By upgrading to SCSI, or SCSI with RAID, he can decrease that 1 second
to 1/4 second. That saves him half a second on every file he edits.
Not really spectacular, unless you edit thousands of those files every
hour. Then consider that he probably edits a few files an hour, and the
CPU/memory bottleneck probably means he loses minutes on every file.

You would be correct in your assumption if that were the case, but his
application does not only read the file at the beginning and write it at
the end. You have to factor in the incremental writes that are being
passed to the hard drive between those two points. Even in a perfect world,
with an optimal CPU and loaded to the max with memory, he is still going to
have considerable disk access. There is no way around this.

In his situation it's probably easy to fix the memory bottleneck, but
most likely the CPU will always remain the bottleneck.



Didn't you say that the OP didn't mention the amount of memory? If this is
the case, you are just guessing. Even in a worst-case scenario with an
inadequate amount of memory, he would still benefit from a RAID solution,
since he will notice increased performance while paging. But guessing at
memory size and what file size he is using isn't going to give any useful
information.





Rita
 
It appears you fixed your newsreader, Rita. Good job! 8)


No, I didn't. If it were really broken, I'd be disappointed that it
seems to be working for you now. Maybe the problem was on your end all this
time? Since you were the only one with issues, I would guess that it's your
problem. Anyway, I'm elated that I made your day.



Rita
 
You would be correct in your assumption if that were the case, but his
application does not only read the file at the beginning and write it at
the end. You have to factor in the incremental writes that are being
passed to the hard drive between those two points. Even in a perfect world,
with an optimal CPU and loaded to the max with memory, he is still going to
have considerable disk access. There is no way around this.

Why??

I have 1 GB of RAM. When I edit 60 MB files, there is no need to page
memory to disk, and there is no need for incremental writes to the
hard disk either. Everything is done in memory; the disk is only
accessed when I save the end result.
Lots of applications work this way (it's the most logical and easiest
way to program an application).

But when you write an application and you fear it will quickly run out
of memory, or want to keep the memory footprint very small for some
reason, you might want to use incremental writes.
3D renderers often do that. For those particular applications, the CPU
is the bottleneck, and you can do incremental writes without slowing
the application down much.
Didn't you say that the OP didn't mention the amount of memory?

He didn't. He did say he edits files of 60 MB, and we know he has less
than 1 GB of RAM.
If this is
the case, you are just guessing. Even in a worst-case scenario with an
inadequate amount of memory, he would still benefit from a RAID solution,
since he will notice increased performance while paging.

Sure he will benefit a little. But it is an ineffective and expensive
solution when you don't have enough memory.
There is only 1 good solution when you don't have enough memory, and
that is to add memory. Everything else is throwing money away.
But guessing at
memory size and what file size he is using isn't going to give any useful
information.

Discussing the different possibilities gives him some idea of what to
look for. That is why I did not focus solely on the possibility of a memory
bottleneck, but also on the chances of a CPU bottleneck or a disk I/O
bottleneck.

You only look at the remote possibility that he has a disk I/O
bottleneck. But that is also guesswork. Why not look at the whole
picture?

Marc
 
You would be correct with your assumption if this were the case. His



Because this is the way the program was written and designed to work.

I have 1 GB of RAM. When I edit 60 MB files, there is no need to page
memory to disk, and there is no need for incremental writes to the
hard disk either. Everything is done in memory; the disk is only
accessed when I save the end result.
Lots of applications work this way (it's the most logical and easiest
way to program an application).

And it's not paging due to lack of memory, it's incrementally writing to the
HD with the sole intention of storing it.

But when you write an application and you fear it will quickly run out
of memory, or want to keep the memory footprint very small for some
reason, you might want to use incremental writes.
3D renderers often do that. For those particular applications, the CPU
is the bottleneck, and you can do incremental writes without slowing
the application down much.


He didn't. He did say he edits files of 60 MB, and we know he has less
than 1 GB of RAM.



768 MB, 512 MB, or 256 MB, it doesn't matter, since you are only guessing. And
he can certainly edit a 60 MB file with 256 MB without paging if he has
minimal applications running in the background. Then again, he may have
128 MB or 64 MB?

Sure he will benefit a little. But it is an ineffective and expensive
solution when you don't have enough memory.
There is only 1 good solution when you don't have enough memory, and
that is to add memory. Everything else is throwing money away.



And that's just guessing, since this isn't the problem.

Discussing the different possibilities gives him some idea of what to
look for. That is why I did not focus solely on the possibility of a memory
bottleneck, but also on the chances of a CPU bottleneck or a disk I/O
bottleneck.



Blind guessing, and assuming a scenario that may not even exist, is far
more detrimental than getting the facts the first time, and amounts to
intentionally giving wrong information.

You only look at the remote possibility that he has a disk I/O
bottleneck. But that is also guesswork. Why not look at the whole
picture?

The whole picture was taken into consideration, and from experience I have
found that in most cases the main culprit is disk I/O.
Major manufacturers of video editing and rendering equipment have
traditionally used high-end RAID configurations in their packages for this
sole reason.



Rita
 
Rita said:
Because this is the way the program was written and designed to work.

So now you're an authority on the internals of Adobe Photoshop?
And it's not paging due to lack of memory, it's incrementally writing to
the HD with the sole intention of storing it.

Why would Adobe Photoshop be doing that?
768 MB, 512 MB, or 256 MB, it doesn't matter, since you are only guessing. And
he can certainly edit a 60 MB file with 256 MB without paging if he has
minimal applications running in the background.

That depends on the footprint of the OS and application he is using.
Then again, he may have
128 MB or 64 MB?

The original poster stated specifically that he was considering a machine
with 1-2 gig of RAM. No "guessing" involved--that's what he said he was
going to be using.
And that's just guessing, since this isn't the problem.

I see. So how much disk access _does_ Photoshop perform while editing a 60
MB file on a machine with 1-2 GB of RAM?
Blind guessing, and assuming a scenario that may not even exist, is far
more detrimental than getting the facts the first time, and amounts to
intentionally giving wrong information.

The scenario was clearly stated.
The whole picture was taken into consideration, and from experience I have
found that in most cases the main culprit is disk I/O.
Major manufacturers of video editing and rendering equipment have
traditionally used high-end RAID configurations in their packages for this
sole reason.

Adobe Photoshop is not a video editing or rendering application and so I
fail to see what relevance the needs of commercial video production houses
have to the question at hand.

And how did you sneak out of my killfile? Back, BACK I say.
 
Because this is the way the program was written and designed to work.

Well, that is just your word against mine.

I know from experience that it does not do that. Whatever you claim
otherwise in a ng doesn't change my real-world experience with the
software.
And it's not paging due to lack of memory, it's incrementally writing to the
HD with the sole intention of storing it.

Again, your word against mine.
SOME programs do this; MOST do not.
768 MB, 512 MB, or 256 MB, it doesn't matter, since you are only guessing. And
he can certainly edit a 60 MB file with 256 MB without paging if he has
minimal applications running in the background. Then again, he may have
128 MB or 64 MB?

Your guess is as good as mine.

But since it is guesswork, I have never claimed that that MUST be his
bottleneck. I have also never claimed that I know what would be the
best solution for his problem, because it is impossible to determine
that without knowing his bottlenecks.

Instead I have given several likely scenarios and told him to
determine which scenario fits in his case, and what that means for the
hardware he needs to buy.

What is so terrible about that?
And that's just guessing, since this isn't the problem.

And how would you know that?

You are also guessing. But you claim that better disk I/O will solve
every problem.
Blind guessing, and assuming a scenario that may not even exist, is far
more detrimental than getting the facts the first time, and amounts to
intentionally giving wrong information.

Come on Rita, be realistic.

Knowing nothing about the current bottlenecks and then claiming that
SCSI will solve all his problems, THAT is intentionally giving wrong
information.

I have given him several possible scenarios, so that he can determine
which scenario applies to him, and make the best hardware choice
without having to come back to this ng for every single step in making
that choice.

It surprises me that you don't see that.
The whole picture was taken into consideration, and from experience I have
found that in most cases the main culprit is disk I/O.

I have found from experience that in most such cases disk I/O is not
the main culprit.

Photoshop relies on memory, memory and more memory (and CPU).
Major manufacturers of video editing and rendering equipment have
traditionally used high-end RAID configurations in their packages for this
sole reason.

It depends on the situation. He is not just doing video editing, but
also photo editing. Those applications have completely different
demands.

Most video editing is done streaming, which means you don't need much
memory at all. But in the vast majority of cases that means that the
CPU is the bottleneck.

Only when you do some really simple editing which doesn't require much
CPU will the disk I/O become the bottleneck.

That might be exactly what the OP is doing, in which case he needs to
buy a SCSI array. But neither you nor I know that.
He might just have a CPU bottleneck in which case it would be wasted
money to buy SCSI.

I really don't understand why you don't want to acknowledge that.
Please explain it to me.

Marc
 