turn off GPU on new chips??


GT

Is there a setting by which you can turn off the internal GPU on the
latest Core i5 and i7 processors? The reason behind my question is an
overclocking and heat point - if using an external graphics board, there is
no need for the on-board graphics, so disabling that part of the
circuitry should reduce heat output, and it seems to me that if
there is more headroom for heat, then a higher overclock of the CPU should
be possible.
 
GT said:
Is there a setting by which you can turn off the internal GPU on the
latest Core i5 and i7 processors? The reason behind my question is an
overclocking and heat point - if using an external graphics board, there is
no need for the on-board graphics, so disabling that part of the
circuitry should reduce heat output, and it seems to me that if
there is more headroom for heat, then a higher overclock of the CPU should
be possible.

You could check the BIOS for an option to disable the onboard video. If there
isn't a disable option, you may be able to elect the order in which video
devices are detected, so if you pick an ordering where a PCI[e] card is
detected first, the onboard video won't be enabled.
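
One quick sanity check, once you've made a change like that, is to see
whether the integrated adapter still enumerates on the PCI bus at all - on
many boards it disappears from the bus once disabled. A rough Python sketch
(assuming a Linux box with lspci installed; the "disappears entirely"
behaviour is an assumption and varies by board and BIOS):

# List VGA-class devices, so you can see whether the Intel IGD still
# enumerates after being disabled in the BIOS. Assumes Linux + lspci.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
vga = [line for line in out.splitlines() if "VGA compatible controller" in line]

for line in vga:
    print(line)

if not any("Intel" in line for line in vga):
    print("No Intel VGA device enumerated - the IGD looks disabled,"
          " or this isn't an Intel IGP system.")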

Without seeing the circuit design and all the data sheets for the IGD
chip, I don't know whether disabling the onboard video or IGD chip also
results in it not getting clocked. Since it is a shared bus, it's
likely the disabled IGD chip is still getting clocked. Whether that
clocking gets used or not will affect whether the chip produces heat
(i.e., it may still receive clocking but not perform any function).

I doubt the heat from the IGD chip is going to affect how much you
can overclock your CPU. Better would be to research more effective
cooling: a bigger fan, a better heatsink, better airflow, ducting outside air
directly over the CPU's heatsink instead of using the pre-heated air
inside the case, switching to water cooling, etc.
 
VanguardLH said:
Without seeing the circuit design and all the data sheets for the IGD
chip, I don't know whether disabling the onboard video or IGD chip also
results in it not getting clocked. Since it is a shared bus, it's
likely the disabled IGD chip is still getting clocked. Whether that
clocking gets used or not will affect whether the chip produces heat
(i.e., it may still receive clocking but not perform any function).

Given Intel's focus on performance per watt and power conservation in
general these days, I'd guess that the onboard GPUs don't have any
significant impact.

But that's just a guess.
 
GT said:
Is there a setting by which you can turn off the internal GPU on the
latest Core i5 and i7 processors? The reason behind my question is an
overclocking and heat point - if using an external graphics board, there is
no need for the on-board graphics, so disabling that part of the
circuitry should reduce heat output, and it seems to me that if
there is more headroom for heat, then a higher overclock of the CPU should
be possible.

The datasheet for an Ivy Bridge says the CPU has C0, C1/C1E, C3, and C6 states,
while the GPU has an RC6 (Render C6) state. The GPU doesn't seem to have other
C states. And the datasheet, while it has transition diagrams for the CPU
C states, doesn't really go into that much detail.

[information from "3rd-gen-core-desktop-vol-1-datasheet.pdf"]

*******

In this article, the author is able to include some data for the various states
he describes. And there's the usual hedging, where C6 implies all power to a
section of circuitry can be removed.

http://www.hardwaresecrets.com/arti...out-the-CPU-C-States-Power-Saving-Modes/611/6

"This allows the CPU internal voltage to be lowered to any value,
including 0 V, what would completely turn off the CPU when it is idle."

When current flows are quoted there, the current flow implies some voltage
is present. If the voltage were identically equal to zero, there would be
no current flow. So it sounds like a voltage is still being applied, and the
description isn't completely consistent in that sense. The CPU is also unlikely
to have a MOSFET per core to shut off the power. If such a feature existed
and worked that way, there'd have to be a lot more "independent" external
phases. And shutting off a phase isn't likely to "go to zero volts" in
100 microseconds.

*******

The Ubuntu people seem to have three controls for the GPU RC6 state.
They would discover these things by analysing the ACPI table sent by
the BIOS on a new motherboard, and then end up asking someone
at Intel what they mean.

https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/993187

"There are muliple RC6 modes. From "modinfo i915":

+------------
| parm: i915_enable_rc6:Enable power-saving render C-state 6. Different stages
| can be selected via bitmask values (0 = disable; 1 = enable rc6; 2 = enable deep rc6;
| 4 = enable deepest rc6). For example, 3 would enable rc6 and deep rc6, and 7 would
| enable everything. default: -1 (use per-chip default) (int)
+------------

Your log shows:

+------------
| [drm:intel_enable_rc6], Sandybridge: deep RC6 disabled
| [drm] Enabling RC6 states: RC6 on, RC6p off, RC6pp off
+------------

The kernel is automatically enabling the basic RC6 mode ("RC6 on") but not the
deep or deepest modes ("RC6p off, RC6pp off"). My suggested boot parameter will
override this, and disable all RC6 modes."
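
To make the bitmask concrete, here is a small Python sketch that decodes an
i915_enable_rc6 value into the RC6 levels it enables, and then tries to read
the live setting and the RC6 residency counter from sysfs. The sysfs paths
are assumptions about kernels of that era - the files may be absent, renamed,
or root-only on other kernels:

# Decode the i915_enable_rc6 bitmask described in the modinfo text above.
LEVELS = {1: "RC6", 2: "deep RC6 (RC6p)", 4: "deepest RC6 (RC6pp)"}

def decode_rc6(value):
    if value == -1:
        return "per-chip default"
    if value == 0:
        return "all RC6 modes disabled"
    return ", ".join(name for bit, name in LEVELS.items() if value & bit) + " enabled"

for v in (-1, 0, 1, 3, 7):
    print(v, "->", decode_rc6(v))

# Live value of the module parameter (path and permissions are an assumption).
try:
    with open("/sys/module/i915/parameters/i915_enable_rc6") as f:
        print("current setting:", decode_rc6(int(f.read())))
except OSError:
    print("couldn't read the parameter (no i915, different kernel, or needs root)")

# If present, this counter shows how long the GPU has actually sat in RC6.
try:
    with open("/sys/class/drm/card0/power/rc6_residency_ms") as f:
        print("time spent in RC6:", f.read().strip(), "ms")
except OSError:
    pass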

*******

I can't conclude anything, except to say, the GPU supports an (R)C6 state.

In theory, the GPU can flush state out of dynamically refreshed internal
memory to a static RAM, so state won't be lost. This almost implies
the C6 state uses a zero clock frequency as its power saving. (Many chips
use DRAM internally - if you drop the clock to zero hertz, the DRAM loses
state. That's why state info has to be moved to a separate static RAM.
Truly static RAM takes more transistors per memory bit, which is why
it isn't used for everything.)
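
If it helps, here is a toy Python model of that "flush to static RAM, then
stop the clock" idea. It is purely an illustration of the concept, not how
the real RC6 hardware is implemented:

# Toy model: flush clock-dependent state to retention SRAM, stop the clock,
# then restore on wake. Names and values are illustrative only.
class GpuBlock:
    def __init__(self):
        self.dynamic_regs = {"ctx_ptr": 0x1000, "ring_head": 42}  # lost with no clock
        self.retention_sram = {}                                  # survives with no clock
        self.clocked = True

    def enter_rc6(self):
        self.retention_sram = dict(self.dynamic_regs)  # save state first...
        self.dynamic_regs = {}                         # ...contents gone once the clock stops
        self.clocked = False                           # zero hertz = near-zero dynamic power

    def exit_rc6(self):
        self.clocked = True
        self.dynamic_regs = dict(self.retention_sram)  # restore and resume

gpu = GpuBlock()
gpu.enter_rc6()
gpu.exit_rc6()
assert gpu.dynamic_regs["ring_head"] == 42             # state survived the trip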

But unless the CPU has multiple VID interfaces to control external
power levels, and multiple power planes, I somehow doubt all this
talk of turning off power completely. The current flows involved
are many, many amps. A MOSFET can certainly switch currents like that,
but its RdsOn would affect the ability to define precise voltage
levels within the processor. Using a MOSFET sounds like a recipe for
disaster. It would also take significant real estate if actually
integrated onto the silicon die.

Now, if we look at the diagram in the Vol.1 datasheet, this is interesting.

http://img526.imageshack.us/img526/3813/3rdgenpowering.gif

VCCIO: 1.05V fixed (with one control bit to switch off? VCCIO_SEL#)
VDDQ: 1.5V fixed
Vcore: SVID controlled?
VAXG: SVID controlled? Two separate phases? Why?
VCCSA: VCCSA_VID (System Agent voltage plane, not CPU, not GPU stuff)

OK, it turns out SVID is a "protocol", but the Intel site won't let you
have a copy of the spec. SVID is not like the previous VID bits used on other
processors; in the old scheme, they were like GPIO pins.

So instead, to understand SVID, look to someone making a regulator
chip with it as an interface.

http://www.intersil.com/content/int...vrm-imvp/multiphase-controllers/ISL6364A.html

"Dual Outputs

* Output 1 (VR0): 1 to 4-Phase for Core or Memory
* Output 2 (VR1): Single Phase for Graphics, System Agent, or Processor I/O

Intel VR12/IMVP7 Compliant

* SerialVID with Programmable IMAX, TMAX, BOOT, ADDRESS OFFSET Registers
"

So Serial VID appears to be a way to have a more complex conversation between
the processor and the motherboard regulators. And potentially it is not limited
to controlling a single regulator, the way the old "look-up table" approach was.
So in the above diagram, where SVID is shared by two separate regulator
controllers, it's a control bus with a protocol on it, rather than just
static logic signals. The bus can send commands to adjust more than
one regulator. It doesn't imply that Vcore and VAXG are "locked together",
as I'd originally guessed.
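
Just to illustrate the architectural difference, here is a made-up toy model
in Python - not the real SVID command set, which Intel doesn't publish. With
the old parallel VID pins, one set of static signals drives one regulator;
with a protocol on a shared bus, addressed "set voltage" commands can target
several regulators independently:

# Toy model only: an addressed command bus versus per-rail static VID pins.
# Addresses, rail names, and voltages here are hypothetical.
class ToyRegulator:
    def __init__(self, name):
        self.name = name
        self.millivolts = 0

class ToyVidBus:
    """One shared bus; each regulator listens at its own address."""
    def __init__(self):
        self.regulators = {}                      # address -> regulator

    def attach(self, address, regulator):
        self.regulators[address] = regulator

    def set_voltage(self, address, millivolts):
        # A single bus transaction can target any attached regulator.
        self.regulators[address].millivolts = millivolts

bus = ToyVidBus()
bus.attach(0x00, ToyRegulator("Vcore"))
bus.attach(0x01, ToyRegulator("VAXG"))

bus.set_voltage(0x00, 1100)   # adjust the core rail...
bus.set_voltage(0x01, 0)      # ...while independently dropping the graphics rail

for reg in bus.regulators.values():
    print(reg.name, reg.millivolts, "mV")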

I can't get a datasheet from Intersil either, without "registering".
So no juicy details on SVID.

So the controls are there for adjusting the VAXG voltage level. That would
be used anyway, for switching the GPU from its 3D running state to its 2D
running state (lower clock).

The GPU sits on the "ring bus", but that doesn't necessarily imply
powering off the GPU would kill the ring. They could have a "station"
interface powered from VAXG, while the ring bus itself is powered
from VCCSA, such that if the GPU were turned off, it wouldn't affect
other things using the ring bus.

I'd say it's feasible to turn it off, but it's pretty hard to predict
whether that is done in practice. And you can never trust a datasheet
to show the appropriate restrictions on operating conditions. One time
at work, it took *3 months* of back-and-forth emails with a supplier,
pleading for information on whether there were rules regarding turning
off the multiple rails on a particular chip, before we received an
answer that they were truly independent.

To give an example of rails that weren't independent, take the case
of failing AMD FX55-type processors. Enthusiasts determined that if
you raised VCore, you also had to raise the VDimm interface voltage
to protect the processor. (Otherwise, you'd kill your FX55.) It implied a
maximum allowed offset in voltage between VCore and VDimm (both of which
connect to the CPU) - almost as if, once there was more than a diode drop
of difference, current was flowing where it shouldn't. The AMD datasheet
showed no such restriction (so it wasn't documented). In the past, when
other devices had restrictions like that, the datasheets used greater-than,
less-than, and equal-to symbols to show power sequencing or operating limits
(so you don't ruin the chip if the power supplies don't come up in the
right order). But most of the datasheets you see today treat the rails as
independent, even if they aren't. So we can't really rely on the voltage
specification section of an Intel document to prove that in practice
the GPU can be turned off.

Summary: There are controls present there, to save a lot of power.
Exactly how much saving, who knows.

If you know where to poke with your multimeter, you can always try
and read out VAXG that way :-) It's either one or two phases
of regulation near the CPU socket.

Paul
 
Paul said:
[snip]

Summary: There are controls present there, to save a lot of power.
Exactly how much saving, who knows.

Wow. Thanks Paul - you really have a lot of know-how and spare time!
Excellent information.
 