Why Pentium?

  • Thread starter: Talal Itani
Noozer said:
The CPU fan is the ONLY device in my Media PC that makes any audible noise
at all.

Then your PC is exceptional. In many PCs, a CPU fan failure is
potentially a serious problem precisely because the CPU fan is
inaudible over the other fan(s) and disk drive(s).
 
The said:
Only up to a certain limit. Once you reach a certain image size,
processing power WILL make a difference because there simply are
that many pixels to process.

Yes, but with current processors, it has to be very large indeed.
It does not even have to be humongous. In Photoshop, a 300 DPI
A3-size poster that a student might do for a school event can chew
up 6 minutes with just one filter like radial blur, although most
other filters on a similar image usually take between 15 and 100
seconds.

That's exactly it: most filters don't take that long. There are some
Photoshop filters that will take minutes even on very small images
with very fast processors, but overall, the built-in Photoshop tools
as well as most of the standard and plug-in filters will work quite
quickly if the image fits in memory. Processor power is not critical.

It's common knowledge that the best way to speed up Photoshop is to
add memory.
Previews are definitely crawling on images of this size and very
often Windows will think Photoshop has stopped responding. Definitely
noticeable.

That is not true. I routinely handle images larger than A3 (A3 at 300
dpi is about 17 megapixels, or 3508 x 4961 pixels), and I see no delays. A
typical filter requires a few seconds. Unsharp mask is around 1.2
seconds, motion blur is around 2 seconds.
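
For concreteness, here is the arithmetic behind that 17-megapixel
figure, as a small Python sketch. The per-layer estimate assumes a
flat 8-bit RGB image; Photoshop's real working set, with layers,
history states and caches, is several times larger.

# Pixel dimensions and raw size of an A3 page (297 x 420 mm) at 300 dpi.
MM_PER_INCH = 25.4

def pixels(mm, dpi):
    return round(mm / MM_PER_INCH * dpi)

w, h = pixels(297, 300), pixels(420, 300)  # 3508 x 4961
megapixels = w * h / 1e6                   # ~17.4 MP
rgb8_mb = w * h * 3 / 2**20                # one flat 8-bit RGB layer, ~50 MB

print(f"{w} x {h} px, {megapixels:.1f} MP, {rgb8_mb:.0f} MB per layer")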

I deal with even larger images on a regular basis (about five times
the size of A3), and I still don't see any big delays for anything.

This is with a 3.0 GHz Pentium and 2 GB of memory.
No, it's not due to insufficient memory (PS reports 100% memory
efficiency). I have 1.5 GB on the machine I do my image editing on,
simply because of the need to handle these sizes without hitting the
disk.

You're still hitting the disk; Photoshop always hits the disk because
it's not really optimized for modern memory systems. However, if you
go to 2 GB, you'll see a clear improvement. The reason is that the
rest of the system is using a lot of memory, so even with 1.5 GB, you
don't necessarily get lots of memory for Photoshop. Also, Photoshop
has its own memory parameters that you can change to get it to use
more memory (it won't use all memory available, no matter what you
do).

If you've already fiddled with Photoshop's memory parameters and you are
getting Windows messages about things not responding, you've set the
Photoshop parameters way too high. You cannot get PS to use all
available memory, no matter what you do. That's largely a PS defect.
 
Rod said:
Wrong, most obviously when the disk IO is almost entirely linear.

The disk I/O is rarely to consecutive sectors or blocks.
Most obviously with video editing today.

Video editing may refer to consecutive logical blocks, but there's no
guarantee that they will appear that way on the disk.
So much for your claim that access times haven't improved
much over 30 or 40 years. That is clearly just plain wrong.

You've forgotten seek time. Access time = latency + seek time. While
actuators are faster than they used to be, the speed hasn't increased
that much.

Here are some numbers I looked up:

On a Seagate Barracuda 7200.9, a recent drive which I picked at
random, the average seek time is just over 11 ms and the average
latency is 4.16 ms, which implies an average access time of 15.16 ms
overall. The capacity is 400,000 MB (400 GB).

Thirty-two years ago, the IBM 3340 ("Winchester") had an access time
of 25 ms. The capacity was 70 MB.

So over a period of over three decades, access times have gone from
25 ms to 15 ms. The improvements in disk drives have been in capacity
and transfer rate, not access time. And today access time has become
by far the greatest source of delay in desktop computer systems. Even
if processors were infinitely fast and transfer rates were infinite as
well, you'd still be waiting about the same amount of time to do
things on your computer thanks to the disk drives.
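
Those figures are easy to sanity-check. A minimal sketch of the
arithmetic in Python, assuming the 7200 rpm spindle speed implied by
the drive's model name and the seek and access figures quoted above:

# Average rotational latency is half a revolution; access = seek + latency.
rpm = 7200
avg_latency_ms = 60_000 / rpm / 2            # 4.17 ms
avg_seek_ms = 11.0                           # quoted figure
barracuda_ms = avg_seek_ms + avg_latency_ms  # ~15.2 ms

ibm_3340_ms = 25.0                           # quoted 1973 figure

speedup = ibm_3340_ms / barracuda_ms         # ~1.65x over ~32 years
print(f"{barracuda_ms:.2f} ms vs {ibm_3340_ms:.0f} ms: "
      f"{(speedup - 1) * 100:.0f}% faster")

The 64% improvement figure cited just below is this same ratio,
computed from the 15.16 ms value and rounded down.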
Pity there isn't much improvement in access times with that drive
pair. The improvement is almost entirely in throughput, not access times.

That has been true throughout the history of disk drives. Access
times have hardly changed at all.
Irrelevant to your claim that access times haven't improved
much over 30 or 40 years. That is clearly just plain wrong.

I just gave you the figures, which show an improvement of 64% in
thirty-two years. The performance has not yet even doubled.
Depends entirely on what you are doing. It's hardly ever a bottleneck
now, except when doing video editing. In spades with transcoding,
where the disk delay is completely irrelevant.

Almost everything on PCs today is held back by disk delay.
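
Whether that holds can be measured directly. Here is a rough sketch
of such a measurement in Python, timing sequential block reads
against random-offset reads of the same file; the file name is
illustrative, and a freshly written file will sit in the OS cache, so
a large, cold file gives the honest numbers.

import os, random, time

PATH = "testfile.bin"  # hypothetical large test file, e.g. 1 GB
BLOCK = 4096
N = 2000

with open(PATH, "rb") as f:
    size = os.path.getsize(PATH)

    t0 = time.perf_counter()
    for _ in range(N):             # sequential: at most one seek
        f.read(BLOCK)
    seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(N):             # random: one seek per read
        f.seek(random.randrange(size - BLOCK))
        f.read(BLOCK)
    rnd = time.perf_counter() - t0

print(f"sequential {seq:.3f}s, random {rnd:.3f}s, ratio {rnd / seq:.1f}x")

On a mechanical drive the random case is dominated by exactly the
seek-plus-latency figure discussed above.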
 
In comp.sys.ibm.pc.hardware.storage Mxsmanic said:
Rod Speed writes: [...]
Almost everything on PCs today is held back by disk delay.

That depends on the application and OS and how smartly they are
implemented. It also depends on how much RAM is available.
For example, write delays when the disk is not saturated
and a reasonable amount of RAM is available for buffering can
be completely eliminated.

In order to minimise read-delays, the application designers
have to understand the OS, the hardware characteristics and
their application well. Many do not.

Arno
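
A minimal sketch of the write buffering Arno describes, in Python;
the class name is illustrative, and in practice the OS page cache
does this in the kernel rather than in the application:

import queue, threading

class WriteBehindFile:
    def __init__(self, path, max_blocks=1024):
        self._f = open(path, "ab")
        self._q = queue.Queue(maxsize=max_blocks)
        self._t = threading.Thread(target=self._drain, daemon=True)
        self._t.start()

    def write(self, data: bytes):
        self._q.put(data)      # returns at RAM speed unless the queue is full

    def _drain(self):
        while True:
            block = self._q.get()
            if block is None:  # sentinel: stop draining
                break
            self._f.write(block)

    def close(self):
        self._q.put(None)
        self._t.join()
        self._f.close()

As long as the disk keeps up, write() returns at memory speed; if the
producer outruns the disk, the bounded queue blocks, which is exactly
the saturated case Arno excludes.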
 
Arno said:
That depends on the application and OS and how smartly they are
implemented.

Virtually all applications are affected. The OS design is constant
and minimizing disk I/O is not a primary design goal.
It also depends on how much RAM is available.
For example, write delays when the disk is not saturated
and a reasonable amount of RAM is available for buffering can
be completely eliminated.

The size of disks so dramatically exceeds that of any in-memory cache
that disk delays can only be slightly diminished, and only in certain
circumstances.
In order to minimise read-delays, the application designers
have to understand the OS, the hardware characteristics and
their application well. Many do not.

They have to reduce or eliminate disk I/O. That's the only real way
to get around the problem.
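
As a sketch of what "reduce or eliminate disk I/O" can look like at
the application level, assuming files small enough to hold in RAM
(the function name is illustrative; the OS page cache does much the
same thing transparently):

# Cache whole files in RAM on first read; repeat accesses never touch disk.
_cache: dict[str, bytes] = {}

def read_cached(path: str) -> bytes:
    if path not in _cache:
        with open(path, "rb") as f:
            _cache[path] = f.read()  # the one and only disk read
    return _cache[path]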
 
Mxsmanic said:
Rod Speed writes
The disk I/O is rarely to consecutive sectors or blocks.

Depends on what is being done, most
obviously with video editing and imaging.
Video editing may refer to consecutive logical blocks, but there's
no guarantee that they will appear that way on the disk.

Sure, but the absolute vast bulk of sectors are.
You've forgotten seek time.
Nope.

Access time = latency + seek time.
Duh.

While actuators are faster than they used to
be, the speed hasn't increased that much.

Fact remains, your completely silly claim that ACCESS TIMES
haven't increased that much in 30-40 years is just plain silly.
Here are some numbers I looked up:
On a Seagate Barracuda 7200.9, a recent drive which I picked at
random, the average seek time is just over 11 ms and the average
latency is 4.16 ms, which implies an average access time of 15.16 ms
overall. The capacity is 400,000 MB (400 GB).
Thirty-two years ago, the IBM 3340 ("Winchester") had
an access time of 25 ms. The capacity was 70 MB.
So over a period of over three decades,
access times have gone from 25 ms to 15 ms.

So it's clearly silly to claim that isn't an improvement.
The improvements in disk drives have been
in capacity and transfer rate, not access time.

And access times aren't necessarily what matters,
most obviously with boot time and video editing.
And today access time has become by far the greatest
source of delay in desktop computer systems.

Wrong again, it's the user that's the main thing that consumes the
time, as should be obvious from looking at the HD light alone.
Even if processors were infinitely fast and transfer rates were
infinite as well, you'd still be waiting about the same amount
of time to do things on your computer thanks to the disk drives.

Processors are completely irrelevant to your stupid claim that
hard drives haven't improved much in 30-40 years speed-wise.
That has been true throughout the history of disk
drives. Access times have hardly changed at all.

Halving is a substantial improvement.
I just gave you the figures, which show an improvement of 64% in
thirty-two years. The performance has not yet even doubled.

Still a substantial improvement, stupid.

And the reality is that few systems spend much
time waiting for the access time delay ANYWAY.
Almost everything on PCs today is held back by disk delay.

Separate matter entirely to where the BOTTLENECK IS.

It's the user, not the hard drive, stupid.

And you can see that from the HD LED on virtually all personal desktop
systems except when they are doing stuff like imaging and video editing.
 
Mxsmanic said:
Arno Wagner writes
Virtually all applications are affected.

Nope, because most IO happens in the background
and the user isn't waiting for it to happen.
The OS design is constant

No it isn't. Most obviously with file caching.
and minimizing disk I/O is not a primary design goal.

Maximising the speed of disk IO certainly is.
The size of disks so dramatically exceeds that of any
in-memory cache that disk delays can only be slightly
diminished, and only in certain circumstances.

Mindlessly silly. It isn't the size of the hard drive that matters,
IT'S THE SIZE OF WHAT IS BEING PROCESSED AT THAT TIME.
They have to reduce or eliminate disk I/O.

Nope, just do it in the background by anticipating
what will be read and by write caching.
That's the only real way to get around the problem.

Wrong, as always. The reason XP takes a substantial amount
of time and disk activity when it boots is because it loads what
is likely to be used at boot time, so it's in RAM when it's needed.
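
A minimal sketch of that anticipatory read-ahead idea in Python; the
file names are hypothetical, and real prefetchers (such as the XP
boot prefetcher described above) work from observed access patterns
rather than a fixed list:

import threading

# Pull files we expect to need soon into RAM in the background, so the
# later "real" read is a memory hit instead of a disk seek.
def prefetch(paths, cache):
    def worker():
        for p in paths:
            with open(p, "rb") as f:
                cache[p] = f.read()
    threading.Thread(target=worker, daemon=True).start()

cache = {}
prefetch(["frame0001.bin", "frame0002.bin"], cache)  # hypothetical files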
 
In comp.sys.ibm.pc.hardware.storage Mxsmanic said:
Arno Wagner writes:
Virtually all applications are affected. The OS design is constant
and minimizing disk I/O is not a primary design goal.

I would advise you to have a look into modern OS design. Reducing any
kind of latency is a primary design goal in general purpose OS design.
The size of disks so dramatically exceeds that of any in-memory cache
that disk delays can only be slightly diminished, and only in certain
circumstances.

Disk size is completely irrelevant. All that matters is what is to
be written.
They have to reduce or eliminate disk I/O. That's the only real way
to get around the problem.

I am sorry, but you do not understand the issue. It is far, far more
complicated than that.

Arno
 
kony said:
It is truly amazing that so many people don't even know the
basics of setting up a reliable system. Fan tech is not new
and it is truly ludicrous that some resist learning how to
do it right and instead just continue arguing. Let us know
if you eventually tire of having to replace fans and feel
guilty about causing system downtime.

So you are world famous now for being able to pick a truly great fan!??
Impressive! ;---o
 
Rod said:
Wrong, as always. The reason XP takes a substantial amount
of time and disk activity when it boots is because it loads what
is likely to be used at boot time, so it's in RAM when it's needed.

I wasn't talking about booting. One never really needs to boot an XP
system, so the time it requires isn't very important.
 
Arno said:
I would advise you to have a look into modern OS design.

No need; I used to write them.
Reducing any kind of latency is a primary design goal in
general purpose OS design.

It's a difficult goal and I don't believe Windows has done much along
those lines. In any case, even in an ideal system, the potential for
improvement is small, especially with hardware that hides disk
structure from the software.
Disk size is completely irrelevant. All that matters is what is to
be written.

The larger the disk, the more space that tends to be allocated, and
the larger that space will be in comparison to any file cache.
I am sorry, but you do not understand the issue. It is far, far more
complicated than that.

I used to measure and optimize computer systems for a living, so I do
know something about the subject.
 
Rod said:
Depends on what is being done, most
obviously with video editing and imaging.

It depends mainly on disk fragmentation.
Sure, but the absolute vast bulk of sectors are.

See above.
So it's clearly silly to claim that isn't an improvement.

It's an insignificant improvement. Sixty percent improvement in disk
performance versus an improvement of five hundred thousand percent for
processors is pretty much the same as no improvement.
And access times aren't necessarily what matters,
most obviously with boot time and video editing.

Access time for disks has been a problem for as long as disks have
existed, and the problem has gotten steadily worse over time, since
disks are not keeping pace with processors and memory.
Wrong again, it's the user that's the main thing that consumes the
time, as should be obvious from looking at the HD light alone.

The user is not counted in delay time.
Processors are completely irrelevant to your stupid claim that
hard drives haven't improved much in 30-40 years speed-wise.

They are relevant in that they have improved by orders of magnitude
more than disk drives, making the gap between disk performance and
processor performance extraordinarily large.
Halving is a substantial improvement.

Not compared to dividing by five thousand.
Still a substantial improvement, stupid.

Not compared to a 5000x improvement.
And the reality is that few systems spend much
time waiting for the access time delay ANYWAY.

Most systems spend a great deal of time waiting for disk.
And you can see that from the HD LED on virtually all personal desktop
systems except when they are doing stuff like imaging and video editing.

The HD LED is uncommon these days, and often not even present or
connected.
 
Mxsmanic said:
Arno Wagner writes
No need; I used to write them.

Not modern ones, you didn't.
It's a difficult goal and I don't believe
Windows has done much along those lines.

You're wrong, as always.
In any case, even in an ideal system,
the potential for improvement is small,

Wrong, as always. In spades with write caching.
especially with hardware that hides disk structure from the software.

You don't need to know that with write caching.
The larger the disk, the more space that tends to be allocated,

Irrelevant. What matters is what IS BEING WRITTEN CURRENTLY.
and the larger that space will be in comparison to any file cache.

Disk size is completely irrelevant. All that matters is what is to be written.
I used to measure and optimize computer systems for
a living, so I do know something about the subject.

You clearly don't; you can't even manage to work
out what needs to be cached with write caching.
 
Mxsmanic said:
Rod Speed writes
It depends mainly on disk fragmentation.

Drives never get that fragmented.
See above.

See above.
It's an insignificant improvement.
Nope.

Sixty percent improvement in disk performance

It's been FAR more than that with PC hard drives.
versus an improvement of five hundred thousand percent
for processors is pretty much the same as no improvement.

Pity what was being discussed was where the
bottleneck is with personal desktop systems.

It isn't necessarily in the drive subsystem with some
ops commonly done on personal desktop systems.

Most obviously with transcoding of video files.
Access time for disks has been a problem for as long as disks
have existed, and the problem has gotten steadily worse over time,

Wrong again, most obviously with editing video files. Access times
are completely irrelevant; it's throughput that matters much more.
since disks are not keeping pace with processors and memory.

The relativities with processors and memory are completely
irrelevant when the hard drive isn't a bottleneck.
The user is not counted in delay time.

Wrong, as always. The hard drive access time clearly is
producing **** all delay when the HD light is hardly ever on.
They are relevant in that they have improved by orders of
magnitude more than disk drives, making the gap between disk
performance and processor performance extraordinarily large.

Still irrelevant when the HD light is hardly ever on.
Most systems spend a great deal of time waiting for disk.

Wrong, as always.
The HD LED is uncommon these days,

Pig ignorant drivel.
and often not even present or connected.

Irrelevant to what it shows when it is connected.
 
kony said:
... because some people who shouldn't be building systems,
do. Same people tend to make other mistakes as well, and in
the end AMD and Intel took a step that guards them against
some forms of incompetence but the system itself still
suffered the downtime.

You'd be willing to bet a $100-$500 component on the reliability of
a $1-$20 (deluxe model ;) component? That just doesn't make any
sense, nor does arguing in that case's favour make any sense.

What good does it possibly achieve to be able to recognize the most
reliable brand of fan, when they're still using ball bearings and
not, say, a magnet to levitate the cooling part and current to
rotate it? (A magnet is already used in the electric motor; why not
also use it to keep the fan centered in the right space? =) That way
*physical* wear would be reduced a notch.

Even in that case I bet the fan would eventually be the *weakest* link.
If the weakest link is that fragile, I'd make damn sure the system
tolerates the failure. Especially when doing so is practically free!

I'm dumbfounded but not speechless (apparently..)
 
Rod said:
Not modern ones, you didn't.

They were quite modern. Not that any real progress has been made in
operating systems over the past few decades, though.
You don't need to know that with write caching.

You need to know it to optimize I/O, whether it is reading or writing.
 