ASUS P4B266 Question

  • Thread starter: xyz
xyz said:
Hi,

What's the fastest PIV CPU the board can handle?

I'm currently running a 2.4 GHz Celeron on my board. All 400 MHz FSB
processors should work, up to 2.6 GHz I suppose.

OsmoJ
 
After 3.5 years of 24x7 service, my P2B-DS started having problems
detecting the onboard SCSI when booting. At first it was only once in
a while, and another three-finger boot (Ctrl-Alt-Del) would get me up
and running again, but now it's so bad that I have to three-finger boot
for half an hour before it will detect the onboard SCSI and boot.

Any ideas?
 
Ronald Cole said:
After 3.5 years of 24x7 service, my P2B-DS started having problems
detecting the onboard SCSI when booting. At first it was only once in
a while, and another three-finger boot would get me up and running
again, but now it's so bad that I have to three-finger boot for half
an hour before it will detect the onboard SCSI and boot.

Any ideas?

How is the power supply doing? Have you checked the voltages lately
in the hardware monitor, via MBM or Asus Probe?

HTH,
Paul
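
[A side note, not from the thread: on a Linux box with lm-sensors set up,
the same readings MBM or Asus Probe would show can be pulled from the
kernel's hwmon interface. A minimal sketch, assuming a reasonably modern
kernel that exposes /sys/class/hwmon; which rails show up, and under what
labels, depends entirely on the sensor chip and driver on the board.]

import glob
import os

# Minimal sketch: dump every voltage input the kernel's hwmon interface
# exposes (values are reported in millivolts). Which rails appear, and
# under what labels, depends on the sensor chip and driver in use.
for chip in glob.glob("/sys/class/hwmon/hwmon*"):
    try:
        with open(os.path.join(chip, "name")) as f:
            name = f.read().strip()
    except OSError:
        continue
    for path in sorted(glob.glob(os.path.join(chip, "in*_input"))):
        with open(path) as f:
            millivolts = int(f.read().strip())
        print(f"{name} {os.path.basename(path)}: {millivolts / 1000.0:.2f} V")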
 
Ronald said:
After 3.5 years of 24x7 service, my P2B-DS started having problems
detecting the onboard SCSI when booting. At first it was only once in
a while, and another three-finger boot would get me up and running
again, but now it's so bad that I have to three-finger boot for half
an hour before it will detect the onboard SCSI and boot.

Any ideas?

Check the CMOS battery.
 
Just a long shot, but try swapping the SCSI cable for another one. I
had a problem like this with a RAID controller, and it turned out to be
the cable. Drives disappeared every couple of weeks to begin with, then
it got more frequent - i.e. every 4 hours. I replaced the cable and now
it works fine. It had worked for months before that in the same config
with no problems.

You don't normally think of a cable failing like this!

Jason
 
Just a long shot, but try swapping the SCSI cable for another one. I
had a problem like this with a RAID controller, and it turned out to be
the cable. Drives disappeared every couple of weeks to begin with, then
it got more frequent - i.e. every 4 hours. I replaced the cable and now
it works fine. It had worked for months before that in the same config
with no problems.

You don't normally think of a cable failing like this!

Once it's booted, it runs without problems until the next kernel
update. I'm going to replace the CMOS battery and see if that does
it. I prefer to change only one thing at a time so that it's easier
to identify the cause when the problem is resolved.
 
Just a long shot, but try swapping the SCSI cable for another one. I
had a problem like this with a RAID controller, and it turned out to be
the cable. Drives disappeared every couple of weeks to begin with, then
it got more frequent - i.e. every 4 hours. I replaced the cable and now
it works fine. It had worked for months before that in the same config
with no problems.

You don't normally think of a cable failing like this!

Tried two different SCSI 68-pin LVD cables, no change.
 
Rubens said:
Check the CMOS battery.

Why would the CMOS battery have anything to do with the SCSI
controller being detected at boot time (other than they are close to
each other on the motherboard)?
 
How is the power supply doing? Have you checked the voltages lately
in the hardware monitor, via MBM or Asus Probe?

From the BIOS:

** Thermal Monitor **
CPU Temperature : 53 C/127 F
MB Temperature : 32 C/ 89 F
** Voltage Monitor **
VCORE Voltage : 1.7V
+3.3V Voltage : 3.2V
+5V Voltage : 4.9V
+12V Voltage : 12.1V
-12V Voltage : -12.5V
-5V Voltage : -5.1V

Is that "close enough for government work"? Or is it about to go
belly-up?
 
Ronald Cole said:
From the BIOS:

** Thermal Monitor **
CPU Temperature : 53 C/127 F
MB Temperature : 32 C/ 89 F
** Voltage Monitor **
VCORE Voltage : 1.7V
+3.3V Voltage : 3.2V
+5V Voltage : 4.9V
+12V Voltage : 12.1V
-12V Voltage : -12.5V
-5V Voltage : -5.1V

Is that "close enough for government work"? Or is it about to go
belly-up?

Looks good. No complaint there.
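
[A quick sanity check, not from the thread: the commonly cited ATX
tolerances are roughly +/-5% on the positive rails and +/-10% on the
negative ones (my recollection of the spec, so treat the limits as
approximate). Plugging in the BIOS readings above:]

# Hedged sketch: check the BIOS-reported rail voltages against commonly
# cited ATX tolerances. The tolerance figures are approximate and the
# readings are copied from the post above; VCORE is CPU-dependent, so it
# is left out of the check.
TOLERANCES = {  # nominal volts -> allowed fractional deviation
    3.3: 0.05,
    5.0: 0.05,
    12.0: 0.05,
    -12.0: 0.10,
    -5.0: 0.10,
}

readings = {3.3: 3.2, 5.0: 4.9, 12.0: 12.1, -12.0: -12.5, -5.0: -5.1}

for nominal, measured in readings.items():
    limit = abs(nominal) * TOLERANCES[nominal]
    deviation = abs(measured - nominal)
    status = "OK" if deviation <= limit else "OUT OF SPEC"
    print(f"{nominal:+.1f}V rail: measured {measured:+.1f}V "
          f"(off by {deviation:.2f}V, limit {limit:.2f}V) -> {status}")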

Are all the SCSI drives internal or external? When disks spin up, they
draw 2 amps from the +12V rail, so that might change the supply voltages
a bit. If the +12V rail was collapsing, I suppose the only sign would be
a longer spinup time, since with +5V still present, the controller on
each (internal) drive should still be able to answer the host
controller's probes.

How about the health of the drives themselves? Do they have any
detectable bad blocks (the "grown" list, as opposed to the "factory
defect" list), especially near the origin? I don't know where
the mode pages are stored - maybe before sector 0? Do you have
another SCSI drive you can put on that bus to test? If the spinup
time of the SCSI disk (until the status is READY) is too long, the
controller's timeout constant (the time it is willing to keep looking
at the drives) will be exceeded, and you'll "miss" them as the
machine tries to boot from whatever it did manage to detect.
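
[Side note, not from the thread: on a Linux system the grown defect
count Paul mentions can be read with smartmontools rather than a vendor
tool. A rough sketch, assuming smartctl is installed; /dev/sda is just a
placeholder for whatever the SCSI disk enumerates as.]

import subprocess

# Rough sketch: ask smartctl for the full SCSI report and pull out the
# defect-list lines. On SCSI disks smartctl normally reports a line like
# "Elements in grown defect list: N" - a nonzero, growing N is a bad sign.
out = subprocess.run(
    ["smartctl", "-a", "/dev/sda"],   # /dev/sda is a placeholder
    capture_output=True, text=True, check=False,
).stdout

for line in out.splitlines():
    if "defect" in line.lower():
        print(line.strip())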

Actually, when you think about it, the difference between a cold
boot and the "three finger" reboot is that the drives are already
spinning in the latter case. I guess the time from when the
motherboard issues a SCSI reset until the drive answers with READY
is now so long that even the head start of already being spun up
is not enough. (That sounds hard to believe...)

If the drive itself was flaky, you'd hear some "clunking" as the
drive tried to recalibrate to track 0 over and over again.

So, how does the drive itself sound? I stopped using my last
SCSI disks when they started getting noisy (spindle noise), even
though the drives had zero grown defects.

HTH,
Paul
 
Ronald Cole said:
After 3.5 years of 24x7 service, my P2B-DS started having problems
detecting the onboard SCSI when booting. At first it was only once in
a while, and another three-finger boot would get me up and running
again, but now it's so bad that I have to three-finger boot for half
an hour before it will detect the onboard SCSI and boot.

Any ideas?

Well, I found and fixed it. It WASN'T the cable. It WASN'T the CMOS
battery. It WAS the BIOS. I got the board with 1008 and upgraded it to
1012B a few years ago. I don't know whether EEPROMs can get "tired"
and flip bits, but an upgrade to 1013 "cured" the problem, whatever it
was exactly... go figure!
 
Ronald Cole said:
Well, I found and fixed it. It WASN'T the cable. It WASN'T the CMOS
battery. It WAS the BIOS. I got the board with 1008 and upgraded it to
1012B a few years ago. I don't know whether EEPROMs can get "tired"
and flip bits, but an upgrade to 1013 "cured" the problem, whatever it
was exactly... go figure!

EEPROMs can get bit rot, just like their UV-erasable brothers.
But it's not something I would have expected to see after only
3.5 years.

Paul
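
[Editorial aside, not from the thread: one way to tell whether the flash
contents actually rotted, rather than guessing, is to dump the chip with
a flash utility and compare dumps taken at different times. A rough
sketch of just the comparison step, with hypothetical file names; note
that areas the BIOS writes itself (ESCD/DMI data, for instance) can
differ legitimately between dumps.]

import hashlib
import sys

# Rough sketch: compare two flash dumps (e.g. one taken when the BIOS was
# first flashed and one taken now) and report whether anything changed.
# File names are hypothetical arguments supplied on the command line.
def digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

old_dump, new_dump = sys.argv[1], sys.argv[2]
if digest(old_dump) == digest(new_dump):
    print("Dumps are identical - no bit rot between them.")
else:
    print("Dumps differ - the flash contents have changed.")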
 