IDE or AHCI?

  • Thread starter: Lynn McGuire
Krypsis wrote
Rod Speed wrote
I turn my computers off when not in use. No point using electricity when I'm not using the computer.

Even if you do, you can set it to hibernate or suspend when that
happens so you don't get a full boot when you turn it on again,
and so won't see that disk activity when you turn it on again.
I turn the power off at the UPS but not at the wall socket.

Why waste the power that the UPS uses?
Waiting for a bootup is no great pain.

It's even less of a pain if you set it to hibernate or suspend.
I walk past my computer, hit a few buttons, do a few other things and by the time I have finished that, the beast is
up and ready.

He was talking about obsessing about the disk activity.
I've yet to see Windows last months without a complete shutdown.

More fool you.
Friends of mine are forced to reboot often because Windows gets itself tied up in knots.

They don't have a clue about how to use it.
Linux, on the other hand, can go for years without even suspending or hibernating.

But usually doesn't, for various reasons.
You don't know what you're talking about.

Everyone can see for themselves who doesn't know what they are talking about, child.
I have modern fast seeking drives in all my computers bar my Powermac and they ALL bog down when accessed by multiple
programs at the same time.

How odd that mine don't. I don't even bother to have a separate PVR anymore
and I bet you couldn't even work out when it's recording in a proper double-blind
trial without being allowed to use the task manager to see what's running etc.
I suggest you do a few simple experiments to prove this to yourself.

Been doing that since before you were even born, thanks child.
Do you reckon the seek limitations of mechanical hard drives might be the reason SSDs are so popular in applications
where speed is paramount?

'Tain't the SEEK speed that's the reason for that, child.
 
Indeed.

FWIW, I've been using Solaris for some time to serve files, and have
routinely had uptimes of about a year before something has come along to
force a shutdown -- usually me wanting to change the hardware
configuration or location. I've just switched to OpenIndiana -- hope I
do as well with it.
 
The easiest uptime records to find on the net are for Novell Netware
machines running for over six years:

I read an article once about an IBM mainframe that had been running
non-stop for a couple of decades - but the only original part was the
frame. Everything else had been hot-swapped over time, usually for
preventive maintenance rather than as a result of failure.

Indeed. Linux is already in the lower-quality sector. Still
good for many tasks, but not for high-reliability or high-uptime use.
It just fulfils the minimal sane requirements for a server OS.

That shows that Windows is properly placed in the "toy" class here.
It never ceases to amaze me that people are willing to settle for
that.

Arno
 
Krypsis wrote


Even if you do, you can set it to hibernate or suspend when that
happens so you don't get a full boot when you turn it on again,
and so won't see that disk activity when you turn it on again.

If it is hibernated, a memory image is stored on the hard disk. Ergo,
there needs to be disk activity to restore said image to RAM.

In suspend mode, the computer goes into a low-power state but does not
write the contents of RAM to disk. A power outage whilst in this state will result in data loss.
From Vista onwards, suspend mode will become hibernate on laptops after
3 hours of inactivity (the default time).
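
As an aside, that three-hour default is adjustable. A minimal sketch in Python, assuming a Vista-or-later Windows box with the standard powercfg utility on the PATH (the 180-minute figure is just the default being discussed, not a recommendation):

```python
# Sketch: query and adjust the sleep-then-hibernate timeout via powercfg.
# Assumes Windows Vista or later; run from an elevated prompt.
import subprocess

def set_hibernate_timeout(minutes: int, on_battery: bool = False) -> None:
    """Set how long the machine stays suspended before converting to hibernate."""
    setting = "hibernate-timeout-dc" if on_battery else "hibernate-timeout-ac"
    subprocess.run(["powercfg", "/change", setting, str(minutes)], check=True)

def show_active_scheme() -> None:
    """Print the GUID and name of the active power scheme, for reference."""
    out = subprocess.run(["powercfg", "/getactivescheme"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())

if __name__ == "__main__":
    show_active_scheme()
    # 180 minutes is the Vista-era default discussed above; pick your own value.
    set_hibernate_timeout(180, on_battery=True)
```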

From my experience, hibernate and suspend, on Windows, are not reliable.
Why waste the power that the UPS uses?

Because my modem and router also run from it and others in my household
use them wirelessly.
It's even less of a pain if you set it to hibernate or suspend.

How much of a pain is it to press one (1) button and enter one (1)
password???
He was talking about obsessing about the disk activity.

I was talking about turning off versus hibernate/suspend.
More fool you.

You need to get out more.
They don't have a clue about how to use it.


But usually doesn't, for various reasons.

Yes, people like me turn them off for various reasons, i.e. to save
power when not in use. Windows, on the other hand, can't go the distance
without a regular reboot.
Everyone can see for themselves who doesn't know what they are talking about, child.

True, most people here seem to have worked out that you're a moron.
How odd that mine don't. I don't even bother to have a separate PVR anymore
and I bet you couldn't even work out when it's recording in a proper double-blind
trial without being allowed to use the task manager to see what's running etc.

As if I care! I don't even bother with a PVR.
Been doing that since before you were even born, thanks child.

At 74 years of age, it's a fair guess that I was born a rather long time
before you. Given the childish nature of your arguments, it's fairly
obvious who is the child here.
'Tain't the SEEK speed that's the reason for that, child.
It is ONE of the reasons but not the only one.
 
David Brown said:
The easiest uptime records to find on the net are for Novell Netware
machines running for over six years:

Not difficult for them to achieve, not being internet-facing and thus
not having to have security updates applied every ten minutes.

I installed several Netware networks years ago running on Token Ring.
This was in the days when Ethernet was still installed using thin coax
cable.

I remember a story - possibly urban myth - about a company which called
out a consultancy to fix a Netware server which had gone down. The firm
had ground to a halt as no-one could get any work done. The problem was
no-one knew where it was; it had been up and running so long. They
eventually found it behind a walled-in space which had been created
during building modifications.
 
Not difficult for them to achieve, not being internet-facing and thus
not having to have security updates applied every ten minutes.

I installed several Netware networks years ago running on Token Ring.
This was in the days when Ethernet was still installed using thin coax
cable.

I remember IBM token ring networks back in the 80s... Yech!!
 
Krypsis wrote
Rod Speed wrote
If it is hibernated, a memory image is stored on the hard disk.

Yes, but you don't get lots of different processes all attempting
drive access simultaneously, so you don't get the problem he was
clearly talking about. You just have ONE process restoring the
RAM contents from the ONE file on the hard drive, and reading
that linearly too.
Ergo, there needs to be disk activity to restore said image to RAM.

But NOT a number of different processes competing for access to the drive.
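
The distinction being argued here (one linear read of a single file versus several readers seeking all over the disk) is easy enough to test for yourself. A rough, hypothetical experiment in Python: time one sequential pass over a scratch file against the same volume of I/O issued as interleaved random reads from several threads. On a mechanical drive the second case is far slower; note that the OS file cache can mask the effect unless the file is larger than RAM or caches are dropped first.

```python
# Rough experiment: sequential read vs. competing random reads of one file.
# Caveat: the OS file cache can hide the difference; use a file bigger than
# RAM, or drop caches, for a realistic result on a mechanical drive.
import os, random, time, threading, tempfile

FILE_SIZE = 256 * 1024 * 1024   # 256 MiB scratch file
BLOCK = 64 * 1024               # 64 KiB per read
path = os.path.join(tempfile.gettempdir(), "seektest.bin")

with open(path, "wb") as f:      # build the scratch file once
    f.write(os.urandom(FILE_SIZE))

def sequential_read():
    """One reader, front to back -- roughly what a hibernate restore does."""
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass

def random_reader(n_reads):
    """One of several 'processes' hitting random offsets in the same file."""
    with open(path, "rb") as f:
        for _ in range(n_reads):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            f.read(BLOCK)

t0 = time.time()
sequential_read()
print("sequential pass:            %.2f s" % (time.time() - t0))

# Same total volume of I/O, split across four competing readers.
threads = [threading.Thread(target=random_reader,
                            args=(FILE_SIZE // BLOCK // 4,))
           for _ in range(4)]
t0 = time.time()
for t in threads: t.start()
for t in threads: t.join()
print("4 competing random readers: %.2f s" % (time.time() - t0))
```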
In suspend mode, the computer goes into a low-power state but does not write the contents of RAM to disk. A power outage whilst in this state
will result in data loss.

Not with a laptop.
From Vista onwards, suspend mode will become hibernate on laptops after 3 hours of inactivity (default time).

No reason why you have to accept the default setting.
From my experience, hibernate and suspend, on Windows, are not reliable.

You're wrong, as always.

The most you have to do is an occasional full reboot every month or few,
depending on how the machine is used, as Win can't go forever without a full reboot.
Because my modem and router also run from it and others in my household use them wirelessly.
How much of a pain is it to press one (1) button and enter one (1) password???

There is a much longer wait till it's performing at full speed again.
I was talking about turning off versus hibernate/suspend.

And ignoring his complaint about seeing lots of processes competing for
drive access with a full reboot, which you don't get with hibernate and suspend.
You need to get out more.

Nope, I use systems like that all the time, thanks child.
Yes, people like me turn them off for various reasons, i.e. to save power when not in use.

That's just one way of saving power.
Windows, on the other hand, can't go the distance without a regular reboot.

And that is MUCH less often than every time you stop using the PC.
True, most people here seem to have worked out that you're a moron.

Everyone can see for themselves that you are lying, as always.
As if I care! I don't even bother with a PVR.

You have always been, and always will be, completely and utterly irrelevant.

That goes in spades for whatever you may or may not claim to care about.
At 74 years of age, it's a fair guess that I was born a rather long time before you.

Guess which pathetic little prat has just got egg all over its pathetic little face, yet again ?
Given the childish nature of your arguments, it's fairly obvious who is the child here.

Guess which pathetic little prat has just got egg all over its pathetic little face, yet again ?
It is ONE of the reasons

Wrong, as always.
but not the only one.

It ain't even one of them, child.
 
Krypsis wrote [...]
True, most people here seem to have worked out that you're a moron.

Everyone can see for themselves that you are lying, as always.

Nope, he's right. Everyone here *does* know you're a moron, and you prove
it each and every time you post.

What you need is a team of mental health professionals to keep you away
from your computer long enough to stop embarrassing yourself on usenet.
 
Krypsis said:
I remember IBM token ring networks back in the 80s... Yech!!

To be fair, it was a very well thought-out network protocol which beat
the pants off CSMA/CD Ethernet.

The main problem was the prohibitive cost of network cards (often
containing a more powerful processor than the host system) and the MAUs
(concentrators). They suffered from IBM's propensity to over-engineer
everything.

Even though IBM developed it to run at 16Mbps (originally 4) and to use
two tokens instead of one, the advent of cheap Ethernet cards and cheap
UTP cabling eventually killed off Token Ring.
 
Yousuf Khan wrote


You can, however, use whatever you care about the speed of as the benchmark.

Unfortunately there is nothing in benchmarks that is relevant to
real-world apps, therefore there is nothing in them that I care about.
I just don't believe that happens all that often, or for long enough, to matter.

I really don't care what you believe. I know what I have seen and what
I've measured.
Like I have said to you before, anyone with even half a clue
boots so rarely that that situation is completely irrelevant. If
you care about the speed of your system, the only thing that
makes any sense at all is to only boot very rarely, weeks or
months apart, and suspend or hibernate, not shutdown.

It also happens after a standby or hibernate resume, not just during
full boot. It's a little less intense with hibernate, and even less with
standby, but it's still there.

Besides, we're not here to take your advice on when or when not to boot
our systems; we know when they need a reboot, and there are good reasons
to do it.

BTW, apparently Windows 8 will have a super-fast boot which will reload
the kernel and drivers from something similar to a mini-hibernate file,
which should result in 10 second reboots or less. They will give you the
option to do a full reload just in case there are changes needed.
Even if you are silly enough to religiously update as often as you can,
any reboot involved should happen when you aren't using the system.

Most of us would say you're being silly not updating regularly. You do
have the option of ignoring the updates as long as you're in the middle of
important work, so I have my Windows Update set to just notify me but
not to automatically apply the updates, but eventually you should
update. Windows security holes abound, and they're usually bad.
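
For reference, that "notify but don't install" behaviour is just the Automatic Updates mode, which on the XP/Vista/7-era Windows being discussed is reflected in the registry. A hedged sketch; the key path and the meaning of the AUOptions values below are the commonly documented ones, so treat this as illustrative rather than definitive:

```python
# Sketch: read the Automatic Updates mode on XP/Vista/7-era Windows.
# AUOptions (commonly documented values): 1 = never check,
# 2 = notify before download, 3 = auto download + notify to install,
# 4 = auto download + install.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update"
MEANING = {1: "never check", 2: "notify before download",
           3: "auto download, notify to install", 4: "fully automatic"}

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        value, _ = winreg.QueryValueEx(k, "AUOptions")
        print("AUOptions =", value, "->", MEANING.get(value, "unknown"))
except FileNotFoundError:
    print("AUOptions not found (policy-managed machine or newer Windows)")
```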
That's just plain wrong with numbers like that.


That's just plain wrong with modern hard drives. Very
minor delays in fact with modern fast-seeking drives.

"Modern fast seeking hard drives" are the biggest burdens on modern PCs
there is. If you take a look at the Windows 7 Experience Index, which
rates the speed of components from 1 to 7.9, slow to fast respectively;
yes it's a benchmark like any of the others and arbitrary in its
measurements, but it is a common benchmark for everyone. It bases the
overall experience number on the slowest component number in the system.
In modern systems, that's invariably the hard drive system. It doesn't
matter whether you have the latest top-line CPU, or hottest new GPU
which are running close to the theoretical top 7.9 number, every system
these days will be stuck with a 5.9 rating if they use a hard drive to
boot up from. All modern hard drives are now stuck at the 5.9 rating
level, therefore all of the fastest HD-based systems are stuck at the
5.9 rating level. In fact, that speed rating is the same whether you
have a hard drive that's less than a year old, or if you have one from 5
years back; if you go back to 10 years ago, the speed rating might go
down to an insignificantly smaller 5.7 vs. 5.9. There are big
improvements in capacity year after year, but not in speed.

That is, unless you go with an SSD as your boot device. SSDs seem to be
the only significant new speed-up technology on the storage front. In
the CPU and GPU realm there have been great leaps and bounds made in
speed, but in storage it's been pretty static for years at a time.
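
For what it's worth, those Experience Index subscores can be read programmatically rather than through the control panel. A minimal sketch, assuming a Windows machine that has already run the assessment and the third-party Python wmi package; the Win32_WinSAT class and its score properties are standard WMI, but treat the snippet as illustrative:

```python
# Sketch: read Windows Experience Index subscores via WMI (Win32_WinSAT).
# Requires: pip install wmi   (pulls in pywin32); Windows only.
import wmi

c = wmi.WMI()
for sat in c.Win32_WinSAT():
    print("Base score (WinSPRLevel):", sat.WinSPRLevel)
    print("CPU:     ", sat.CPUScore)
    print("Memory:  ", sat.MemoryScore)
    print("Graphics:", sat.GraphicsScore)
    print("Gaming:  ", sat.D3DScore)
    print("Disk:    ", sat.DiskScore)   # the 5.9 ceiling discussed above
```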
 
Windows was never intended as a server OS. It still shows.
There is a lot that cannot be done on Windows without a shutdown.
Machines get slower and slower with uptime. I know a few people
who administer Windows servers, and they usually do scheduled
reboots every 30 days or so.

Longest uptime I had with a Linux server/firewall box was
400 days, then I replaced the kernel. No issues at that
time despite constant I/O and network load during the day.
This experience is fairly typical. Still, for my desktop system
I shut down Linux as well. Hibernating is at the very least a
security risk and basically unnecessary. I do the same as you;
1-2 minutes are not hard to pass.

Linux is just as bad these days, at least desktop versions, like Ubuntu.
There's a constant barrage of updates, most don't require a reboot, but
whenever there's a new kernel update, that does require a reboot. And
there seems to be a kernel update every two weeks nowadays. It's damn
near impossible to keep Linux running constantly for more than a week
now without ignoring updates.
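
Whether a given batch of updates actually demands a reboot is easy to check before shutting anything down. A small sketch for Debian/Ubuntu-style systems, comparing the running kernel with the newest one installed under /boot and looking for the usual reboot-required flag file; the paths are the common Debian conventions, so adjust for other distros:

```python
# Sketch: decide whether a Debian/Ubuntu box actually needs a reboot.
# Checks /var/run/reboot-required and compares the running kernel
# against the newest kernel image installed under /boot.
import glob
import os
import platform

def newest_installed_kernel() -> str:
    """Return the version string of the newest vmlinuz-* in /boot."""
    kernels = [os.path.basename(p)[len("vmlinuz-"):]
               for p in glob.glob("/boot/vmlinuz-*")]
    # Crude lexicographic sort; a real script would compare versions properly.
    return sorted(kernels)[-1] if kernels else ""

running = platform.release()               # e.g. "3.2.0-4-amd64"
installed = newest_installed_kernel()
flagged = os.path.exists("/var/run/reboot-required")

print("running kernel:   ", running)
print("newest installed: ", installed)
print("reboot-required flag present:", flagged)

if flagged or (installed and installed != running):
    print("=> a reboot is probably needed to pick up the new kernel")
else:
    print("=> no kernel-related reboot pending")
```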

Yousuf Khan
 
Mike Tomlinson said:
To be fair, it was a very well thought-out network protocol which beat
the pants off CSMA/CD Ethernet.
The main problem was the prohibitive cost of network cards (often
containing a more powerful processor than the host system) and the MAUs
(concentrators). They suffered from IBM's propensity to over-engineer
everything.
Even though IBM developed it to run at 16Mbps (originally 4) and to use
two tokens instead of one, the advent of cheap Ethernet cards and cheap
UTP cabling eventually killed off Token Ring.

There was another problem: Inadequate resilience to card failure.
ATM was killed by the same thing plus extreme vendor egoism (fully
compatible only within one vendor).

The main driving point for Ethernet is that it is simple and there
is a good standard that does not leave critical grey areas.

Or in one sentence: Ethernet is infrastructure, while Token Ring
was one vendor's ego trip.

Arno
 
Linux is just as bad these days, at least desktop versions, like Ubuntu.
There's a constant barrage of updates, most don't require a reboot, but
whenever there's a new kernel update, that does require a reboot. And
there seems to be a kernel update every two weeks nowadays. It's damn
near impossible to keep Linux running constantly for more than a week
now without ignoring updates.

My guess is that these vendors think they have to do it this
way in order to keep their business. Go to Debian stable and
update automatically only from the security repo. This solves
the issue.

Of course if you want the colorful lights on your desktop (i.e.
the broken MS interface or one of its clones), forget about
stability.

Arno
 
To be fair, it was a very well thought-out network protocol which beat
the pants off CSMA/CD Ethernet.

The main problem was the prohibitive cost of network cards (often
containing a more powerful processor than the host system) and the MAUs
(concentrators). They suffered from IBM's propensity to over-engineer
everything.

Even though IBM developed it to run at 16Mbps (originally 4) and to use
two tokens instead of one, the advent of cheap Ethernet cards and cheap
UTP cabling eventually killed off Token Ring.

Not to mention the fact that Ethernet is hardly the underachieving
shared-bus architecture it once was; switching hubs have now transformed
it into a full-blown star architecture. Even everyday home routers are
Ethernet switches now.

Yousuf Khan
 
Yousuf Khan said:
Yousuf Khan wrote
David Brown wrote [...]
I monitor the disk subsection of the Resource Monitor regularly, and very often when the disk is busy the Disk Queue
Length is over 1.00 (meaning more than 1 process is actively waiting on the disk) and the Active Time is pegged near
100%.

Doesn't mean that NCQ doesn't help in that situation.
When I'm talking about the disk queue being higher than 1.00, I don't
mean just something minor like 1.01 or 1.10; I'm talking about
5.00, or even 10.00! There could be 10 processes waiting on the disk queue
at any given time. This normally happens during boot-up, but it
doesn't take very long for the disk queue to kick up to the stratosphere
at any other time either. Just a few apps trying to access the same disk at the same
time, and you've got major delays.
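
Those queue-length figures can be logged outside Resource Monitor too. A minimal sketch, assuming Windows with the standard built-in typeperf counter tool; the counter path is the stock PhysicalDisk one:

```python
# Sketch: sample the average disk queue length from Python,
# using Windows' built-in typeperf performance-counter tool.
import subprocess

COUNTER = r"\PhysicalDisk(_Total)\Avg. Disk Queue Length"

# -si 1: sample every second, -sc 10: take ten samples, then exit.
result = subprocess.run(
    ["typeperf", COUNTER, "-si", "1", "-sc", "10"],
    capture_output=True, text=True, check=True)

for line in result.stdout.splitlines():
    # CSV lines: timestamp, queue length; sustained values above 1-2
    # indicate the contention described above.
    print(line)
```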
Yousuf Khan

Seems something was done here in 3.2 and more may be done in
the near future. Although, from an article on LWN, it seems
the current FS people have trouble understanding some of the
proposals made.

Arno

I'm sorry, I didn't understand a word you said, what are you talking about?

Yousuf Khan
 
Arno <[email protected]> said:
Ethernet is infrastructure,

The original Ethernet wasn't intended to be infrastructure; it was
invented to link together workstations at Xerox PARC. It ran at
2.94Mbps, and was developed further by DIX (Digital, Intel and Xerox),
which increased its speed to 10Mbps.
 