IDE or AHCI?

  • Thread starter: Lynn McGuire
Yousuf Khan wrote
Rod Speed wrote
Unfortunately there is nothing in benchmarks that is relevant to
real-world apps, therefore there is nothing in them that I care about.

That's the opposite of what I was talking about. You should take whatever
it is that you care about the speed of, and USE THAT AS THE BENCHMARK.

Take whatever real world work you care about the
speed of AND USE THAT AS THE BENCHMARK.
I really don't care what you believe. I know what I have seen and what I've measured.

And the only example you have actually been able to list where
that actually happens for long enough to matter is with the boot,
and there are lots of ways to avoid that happening enough to matter.
It also happens after a standby or hibernate resume,

Like hell it does with 10 different processes competing for drive access.
not just during full boot. It's a little less intense with hibernate,

It doesn't happen at all with a return from hibernate. JUST
ONE process is using the drive to load the contents of RAM
from the hibernate file, and it's just from one file too, so
there isn't even access to multiple files going on either.
and even less with standby,

None at all with suspend to ram in fact.
but it's still there.

That is just plain wrong.
Besides, we're not here to take your advice on when or when not to boot our systems, we know when it needs to reboot,

You clearly don't if you choose to do a full shutdown and
reboot when you aren't using the system and don't like the
competition for drive access you get in a full reboot.
and there are good reasons to do it.

Nope, not one if you don't like the competition for disk access by various processes.
BTW, apparently Windows 8 will have a super-fast boot which will reload the kernel and drivers from something similar
to a mini-hibernate file, which should result in 10 second reboots or less.

A hibernate doesn't take that long right now. A suspend in spades.
They will give you the option to do a full reload just in case there are changes needed.
Most of us would say you're being silly not updating regularly.

That's not the same thing as AS OFTEN AS YOU CAN.
You do have the option of ignoring the updates as long as you're in the middle of important work, so I have my Windows
update set to just notify me but not to automatically apply the updates, but eventually you should update. Windows
security holes abound, and they're usually bad.

That last is a barefaced, pig-ignorant lie.

And that's an entirely separate matter from the other point: that there
is no reason you can't do the update WHEN YOU AREN'T USING
THE SYSTEM, SO YOU DON'T CARE HOW LONG THE
REBOOT ASSOCIATED WITH THAT TAKES ANYWAY.
"Modern fast seeking hard drives" are the biggest burden there is on modern PCs.

Wrong again, the user is.
If you take a look at the Windows 7 Experience Index, which rates the speed of components from 1.0 to 7.9, slow to fast
respectively; yes, it's a benchmark like any of the others, and
arbitrary in its measurements, but it is a common benchmark for everyone. It bases the overall experience number on
the slowest component's number in the system.
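The "slowest component" rule described above is trivial to state precisely: the base score is just the minimum of the component subscores. A minimal sketch (the component names and values here are illustrative, not pulled from any real machine):

```python
# WEI-style "weakest link" scoring: the overall experience number is
# the lowest-rated component's subscore.
scores = {
    "processor": 7.6,
    "memory": 7.5,
    "graphics": 7.9,
    "gaming_graphics": 7.9,
    "primary_hard_disk": 5.9,  # hard drives typically cap out here
}
base_score = min(scores.values())
print(base_score)  # 5.9
```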

It's just some fool's 'index' of nothing meaningful at all.
In modern systems, that's invariably the hard drive system.

Pity that in the real world, with most disk activity actually being media
files, where the file is accessed linearly, and the speed of access is
entirely determined by the media play speed, the drive has no effect
whatever on the speed at which the media is played.
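The claim that linear media playback is rate-limited by the stream rather than the drive is easy to sanity-check with back-of-envelope numbers. Both figures below are rough assumptions, not measurements:

```python
# Even a high-bitrate video stream needs only a small fraction of a
# hard drive's sequential read throughput.
bluray_bitrate_mbps = 40   # assumed high-end Blu-ray stream, Mbit/s
hdd_seq_read_mbs = 100     # assumed modest 7200 RPM sequential read, MB/s

required_mbs = bluray_bitrate_mbps / 8      # Mbit/s -> MB/s
headroom = hdd_seq_read_mbs / required_mbs  # how many such streams fit
print(required_mbs, headroom)  # 5.0 20.0
```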
It doesn't matter whether you have the latest top-line CPU, or hottest new GPU which are running close to the
theoretical top 7.9 number, every system these days will be stuck
with a 5.9 rating if they use a hard drive to boot up from.

The boot time is completely irrelevant for anyone with even half
a clue, because they arrange for that boot to happen when they
aren't using the system.
All modern hard drives are now stuck at the 5.9 rating level, therefore all of the fastest HD-based systems are stuck
at the 5.9 rating level. In fact, that speed rating is the same whether you
have a hard drive that's less than a year old, or if you have one
from 5 years back; if you go back 10 years, the speed rating
might only go down to a slightly smaller 5.7 vs. 5.9. There are big
improvements in capacity year after year, but not in speed.

Pity it's the boot speed that's completely irrelevant to anyone with
even half a clue, because anyone with even half a clue arranges
for their system to boot when they aren't using it.
That is, unless you go with an SSD as your boot device. SSDs seem to
be the only significant new speed-up technology on the storage front.

Wrong again. Suspend and hibernate are.
In the CPU and GPU realm, there have been great leaps and bounds made in speed, but in storage it's been pretty static
for years at a time.

More mindless silly drivel.
 
Linux is just as bad these days, at least desktop versions, like Ubuntu.
There's a constant barrage of updates, most don't require a reboot, but
whenever there's a new kernel update, that does require a reboot. And
there seems to be a kernel update every two weeks nowadays. It's damn
near impossible to keep Linux running constantly for more than a week
now without ignoring updates.
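The reboot is only actually needed when the running kernel is older than the newest one installed. A hedged sketch of that check; the version-string handling is deliberately simplified, and the release strings are made-up examples:

```python
import re

def version_key(release):
    """Extract the leading dotted-number part of a kernel release
    string (e.g. "3.2.0-23-generic") as a sortable tuple."""
    m = re.match(r"(\d+(?:\.\d+)*)", release)
    return tuple(int(x) for x in m.group(1).split(".")) if m else ()

def needs_reboot(running, installed):
    """True if some installed kernel is newer than the one running."""
    return any(version_key(k) > version_key(running) for k in installed)

# On a real system `running` would come from `uname -r` and `installed`
# from listing /lib/modules; these are canned examples.
print(needs_reboot("3.1.10", ["3.1.10", "3.2.2"]))  # True
print(needs_reboot("3.2.2", ["3.1.10", "3.2.2"]))   # False
```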

Yousuf Khan

My experience has been different. I use MythTV running on Linux, and I
keep that machine running for months at a time. But it's nearly a
dedicated machine (it also acts as a DHCP server, but that's not much of
a load), and not used for Internet access.

It all depends on what you're trying to accomplish as to which platform(s) will suit.
 
Yousuf Khan wrote


That's the opposite of what I was talking about. You should take whatever
it is that you care about the speed of, and USE THAT AS THE BENCHMARK.

Take whatever real world work you care about the
speed of AND USE THAT AS THE BENCHMARK.

Well, in the real world, what I care about most is multitasking, when
several apps would be hitting the disk(s) at the same time. When they
decide to hit the disk(s) is unpredictable. There are various apps
running automatically in the background that hit the disk, from virus
scanners, to streaming media services, to disk backups, bittorrents,
etc., plus whatever app I'm using at the time in the foreground. I've
had them all hit simultaneously at times. This cannot be adequately
modelled by any single benchmark. If I were to run several instances of
the benchmark at the same time, then it might work.
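The two ideas in this exchange, "use your own workload as the benchmark" and "run several instances at the same time", can be combined in a few lines: spawn several readers against separate files and time the whole run. A minimal sketch; the file sizes and thread count are arbitrary, and for real measurements you would want files large enough (or caches dropped) that the OS page cache doesn't serve everything from RAM:

```python
import os
import tempfile
import threading
import time

def make_test_file(path, size_mb=2):
    """Write a throwaway file of incompressible data."""
    with open(path, "wb") as f:
        f.write(os.urandom(size_mb * 1024 * 1024))

def read_file(path, chunk=64 * 1024):
    """Read a file front to back in fixed-size chunks."""
    with open(path, "rb") as f:
        while f.read(chunk):
            pass

def concurrent_read_seconds(paths):
    """Time several readers hitting the disk simultaneously."""
    threads = [threading.Thread(target=read_file, args=(p,)) for p in paths]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"f{i}.bin") for i in range(4)]
        for p in paths:
            make_test_file(p)
        print(f"4 concurrent readers: {concurrent_read_seconds(paths):.3f} s")
```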
And the only example you have actually been able to list where
that actually happens for long enough to matter is with the boot,
and there are lots of ways to avoid that happening enough to matter.

It's the only example that's easy to describe; every other instance of this
happening is random processes running simultaneously at random times.
Like hell it does with 10 different processes competing for drive access.


It doesn't happen at all with a return from hibernate. JUST
ONE process is using the drive to load the contents of RAM
from the hibernate file, and it's just from one file too, so
there isn't even access to multiple files going on either.


None at all with suspend to ram in fact.

After they come back from standby or hibernate, the OS still does a
rescan of the hardware to see that everything is still there, you'll see
large plateaus in the disk activity graphs during this time that can
last up to 10 seconds. My system is unusually huge, more a server than a
desktop really: 6 internal hard drives, 2 internal optical drives, and
various external USB & eSATA hard drives. It's probably not a system
that should be running a desktop Windows really, more likely it should
be running a Windows Server. In fact, that's probably the reason why
running Linux on this system seems so much smoother.
That is just plain wrong.

It's only wrong because your little world view doesn't allow for it.
What's beyond your horizon doesn't exist for you.
Wrong again, the user is.

Then why is the user waiting for things to get done when the hard drives
get busy?
It's just some fool's 'index' of nothing meaningful at all.


Pity that in the real world, with most disk activity actually being media
files, where the file is accessed linearly, and the speed of access is
entirely determined by the media play speed, the drive has no effect
whatever on the speed at which the media is played.


The boot time is completely irrelevant for anyone with even half
a clue, because they arrange for that boot to happen when they
aren't using the system.

The WEI disk benchmark is always based on the system disk, i.e. the boot
disk. The boot times are irrelevant here, it's just measuring the raw
performance of the boot disk, but not during boot. You could conceivably
have an SSD as a secondary data disk, while still booting from an HD,
and your SSD's speed will be ignored completely and the times will be
based completely on the HD's speed, because that's what you boot from.

It actually makes sense to use this disk as the benchmark target, as
most or all of the Windows system files are located on the boot disk.
Also, the applications are most often located on this drive too. So it's
quite likely to be the most accessed drive in the system.

Yousuf Khan
 
Yousuf Khan wrote
Rod Speed wrote
Well, in the real world, what I care about most is multitasking, when several apps would be hitting the disk(s) at the
same time.

Then you should be using that config as the benchmark, if you actually
use that config much of the time, so that its speed matters.
When they decide to hit the disk(s) is unpredictable.

Not really.
There are various apps running automatically in the background that hit the disk, from virus scanners, to streaming
media services, to disk backups, bittorrents, etc.,

Yes, but with those that significantly affect the speed of
the work the user is doing, when they do that is mostly
configurable, most obviously with backups and virus scans.

Streaming media services and bit torrents don't significantly
affect the speed of what the user is actually doing.

I don't even bother to have a separate PVR anymore,
and that can be recording anything up to 10 broadcast
TV channels simultaneously, and playing one of the
recorded programs, on one of the 5400 RPM eco drives,
while doing bittorrents and other downloads as well,
without having any noticeable effect on what the user is doing.
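The PVR claim above is at least plausible on paper: even eleven simultaneous broadcast-rate streams sit far below a 5400 RPM drive's sequential throughput. A back-of-envelope check, where all bitrates and throughputs are assumed round numbers rather than measurements:

```python
# Aggregate bandwidth of 10 recordings plus 1 playback vs. an "eco"
# drive's sequential throughput. All figures are assumptions.
channels = 10
playback_streams = 1
mpeg2_broadcast_mbps = 8   # typical DVB/ATSC channel bitrate, Mbit/s
eco_drive_mbs = 80         # conservative 5400 RPM sequential rate, MB/s

total_mbps = (channels + playback_streams) * mpeg2_broadcast_mbps
total_mbs = total_mbps / 8  # Mbit/s -> MB/s
print(total_mbs, total_mbs < eco_drive_mbs)  # 11.0 True
```

The real caveat is seeking, not bandwidth: eleven streams means the head hops between files, which is exactly what NCQ and large write buffers are meant to smooth over.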
plus whatever app I'm using at the time in the foreground. I've had them all hit simultaneously at times.

Then you haven't configured your system properly with the virus
scans and backups.
This cannot be adequately modelled by any single benchmark.

I'M NOT TALKING ABOUT ANY BENCHMARK MODELLING ANYTHING.

I AM TALKING ABOUT USING THE CONFIG YOU CARE ABOUT
THE SPEED OF, DOING REAL LIVE WORK YOU DO ALL THE TIME,
AS THE TEST OF THE SPEED OF THE CONFIG.
If I were to run several instances of the benchmark at the same time, then it might work.

See just above.

You should be using the combination of stuff like bittorrents and
media streaming that can happen at the same time as the user
task as the test of the speed of the config, not any benchmark at all.
It's the only example that's easy to describe, every other instance this happens is random processes running
simultaneously at random times.

There is nothing random about when backups and virus scans happen.

And those can be configured to have minimal impact on what the user is
doing, if you were actually silly enough to have scheduled them to happen at
a time when you don't like their impact on the work the user is doing.
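Gating heavy background jobs on an idle window is the whole trick being described here. A toy sketch of the gate itself; the 3 a.m. to 6 a.m. window is an arbitrary assumption, and a real setup would use the scheduler built into the backup or antivirus tool (or cron/Task Scheduler):

```python
# Only let a heavy background job (backup, virus scan) run inside an
# overnight maintenance window, so it never competes with foreground work.
def in_idle_window(hour, start=3, end=6):
    """True if `hour` (0-23) falls inside the maintenance window."""
    return start <= hour < end

print(in_idle_window(4))   # True  -> run the backup
print(in_idle_window(14))  # False -> defer it
```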
After they come back from standby or hibernate, the OS still does a rescan of the hardware to see that everything is
still there,

And that does NOT involve multiple processes competing for drive access.
you'll see large plateaus in the disk activity graphs during this time that can last up to 10 seconds.

Like hell you do. In spades when coming out of suspend;
you in fact see no disk activity whatsoever.
My system is unusually huge, more a server than a desktop really:

Irrelevant to that silly claim of yours about multiple processes
competing for drive access when coming out of hibernate or suspend.
6 internal hard drives, 2 internal optical drives, and various external USB & eSATA hard drives.

That's nothing unusual, and irrelevant to what is being discussed:
your silly claim about multiple apps competing for drive activity
when coming out of hibernate or suspend instead of a full boot.
It's probably not a system that should be running a desktop Windows really, more likely it should be running a Windows
Server.

Depends on how the drives are used.
In fact, that's probably the reason why running Linux on this system seems to be so much smoother on it.

'Fraid not.
It's only wrong because your little world view doesn't allow for it.

Wrong, as in your stupid claim about what happens with drive activity when coming out of suspend.

Anyone can see for themselves that there is NO drive activity as a result of that.
What's beyond your horizon doesn't exist for you.

Usual puerile attempt at insults.
Then why is the user waiting for things to get done when the hard drives get busy?

Irrelevant to your silly BIGGEST BURDEN claim.
The WEI disk benchmark is always based on the system disk, i.e. the boot disk. The boot times are irrelevant here,
it's just measuring
the raw performance of the boot disk, but not during boot.

It isn't 'measuring' a damned thing.
You could conceivably have an SSD as a secondary data disk, while still booting from an HD, and your SSD's speed will
be ignored completely

So it isn't actually 'measuring' a damned thing.
and the times will be based completely on the HD's speed, because that's what you boot from.

So it isn't actually 'measuring' a damned thing.
It actually makes sense to use this disk as the benchmark target,

Like hell it does.
as most or all of the Windows system files are located on the boot disk.

Pity they don't get accessed much at all once the boot is complete if you have plenty of RAM.
Also, the applications are most often located on this drive too.

Where the apps are located is also irrelevant when the machine is configured
properly so the apps that are used much are started at boot time.
So it's quite likely to be the most accessed drive in the system.

Wrong again. The most accessed drive is where the data
files are stored for all except virus scans and full backups.
 
David Brown said:
On 1/4/2012 3:41 PM, Rod Speed wrote:
Yousuf Khan wrote
David Brown wrote
[...]
I monitor the disk subsection of the Resource Monitor regularly,
and very often when the disk is busy the Disk Queue
Length is over 1.00 (meaning more than 1 process is actively
waiting on the disk) and the Active Time is pegged near
100%.

Doesn't mean that NCQ doesn't help in that situation.

When I'm talking about the disk queue being higher than 1.00, I don't
mean just something minor like 1.01, or 1.10, but I'm talking about
5.00, or even 10.00! There could be 10 processes waiting on the disk queue
at any given time. This normally happens during boot-up time, but it
doesn't take very long for the disk queue to kick up to the stratosphere
at any time. Just a few apps trying to access the same disk at the same
time, and you've got major delays.
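Windows exposes the disk queue in Resource Monitor; on Linux the closest analogue is the 9th per-device counter in /proc/diskstats ("I/Os currently in progress"). A hedged sketch of a parser for that field, exercised against a canned sample rather than a live system (the sample numbers are made up):

```python
# In /proc/diskstats each line is: major minor device-name, followed by
# the kernel's per-device I/O counters. The 9th counter (index 11 after
# splitting) is "I/Os currently in progress" -- the Linux analogue of
# Windows' disk queue length.
def in_flight_ios(diskstats_text):
    """Map device name -> I/Os currently in progress."""
    result = {}
    for line in diskstats_text.splitlines():
        parts = line.split()
        if len(parts) >= 12:
            result[parts[2]] = int(parts[11])
    return result

SAMPLE = """\
   8       0 sda 84650 12010 2261212 87420 35172 55528 1087632 441400 3 95516 528800
   8       1 sda1 84400 12000 2259000 87000 35000 55000 1085000 440000 0 95000 527000
"""
print(in_flight_ios(SAMPLE))  # {'sda': 3, 'sda1': 0}
```

On a real Linux box you would feed it `open("/proc/diskstats").read()` and poll in a loop.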

Yousuf Khan

Seems tsomething was done here in 3.2 and moire maybe done in
the near future. Although from an article on LWN, its seems
the curent FS people have trouble understanding some of the
proposals made.

Arno

I'm sorry, I didn't understand a word you said, what are you talking about?

Yousuf Khan
It made some sense to me - maybe this key will help:
Translation of the abbreviations:
"3.2" = Linux kernel version 3.2
"LWN" = Linux Weekly News (a website)
"FS" = Filesystem

All correct.
And the typos:
"tsomething" = something
"moire" = more
"its" = it
"curent" = current

Note to self: Don't post when drunk ;-)
I couldn't figure out which article Arno was referring to - maybe we
could have a link?

Sure: http://lwn.net/Articles/456904/

Arno
 
David Brown said:
On 06/01/2012 4:48 AM, Arno wrote:
[...]
All correct.

Ah! Okay, I understand it now. BTW, my measurements were referring to
Windows 7 here, not Linux. I actually don't have much of an issue with
Linux using the same hardware, everything runs very fast. I just wish I
could do more of my work in Linux, but I'm somewhat dependent on Windows
apps in many cases. My Linux usage mainly consists of light-duty web
surfing, which is fine, but I can't leave it in Linux most of the time
simply for web surfing alone; in Windows, I can get the web surfing done
and other work too.

Oh interesting, so some cutting-edge algorithms are being proposed for
the Linux kernel to avoid overloading the disk queues. So has
Fengguang Wu's rejigged I/O patchset been implemented in Linux kernel
3.2 full-time now?
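For anyone wanting to poke at the writeback behaviour that patchset targets, the relevant knobs are the vm.dirty_* sysctls. A hedged sketch that reads them, falling back to the long-standing documented defaults when /proc/sys/vm isn't available (e.g. on non-Linux hosts):

```python
import os

# Documented historical defaults for these sysctls; used only as a
# fallback when /proc/sys/vm can't be read.
DEFAULTS = {"dirty_ratio": 20, "dirty_background_ratio": 10}

def read_vm_tunable(name):
    """Read one vm.* writeback tunable, defaulting if unavailable."""
    path = os.path.join("/proc/sys/vm", name)
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return DEFAULTS[name]

for name in sorted(DEFAULTS):
    print(name, read_vm_tunable(name))
```

These percentages of RAM control when processes start being throttled for dirtying pages, which is exactly the mechanism the I/O-less dirty throttling work reworked.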

Yousuf Khan
 