What's with Haswell's FIVR (Fully Integrated Voltage Regulator) and motherboards?

  • Thread starter: larrymoencurly

larrymoencurly

The Intel Haswell CPUs have a voltage regulator built into the
package, so shouldn't that eliminate the need for the VRM around
the CPU socket? If so, why do some socket 1150 motherboards have
elaborate VRMs with 10-12 phases around the CPU?
 
The slide set still shows an "input VR" feeding the CPU.
You would think they would just apply +12V on the right hand
side of the first diagram, rather than some lower voltage.

http://hothardware.com/News/Haswell-Takes-A-Major-Step-Forward-Integrates-Voltage-Regulator/

The second slide on that page says the external "input VR"
converts 12V to 2.4V. Then the "thing" in the CPU converts
2.4V to somewhere around a volt or so for the core. Using 2.4V
means they'll need to run a higher current into the processor,
which would take more "pins".
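The pin-count point is just Ohm's law. A back-of-the-envelope sketch (all numbers are assumptions for illustration; 84W is roughly a Haswell desktop TDP):

```python
# Current a CPU package must sink at different input rails for the
# same delivered power, I = P / V, ignoring conversion losses.
def input_current(power_w, rail_v):
    """Amps drawn from a rail delivering power_w watts."""
    return power_w / rail_v

power = 84.0  # watts (illustrative)
for rail in (12.0, 2.4, 1.0):
    print(f"{rail:5.1f} V rail -> {input_current(power, rail):6.1f} A")
```

At 12V in, the package would only need to carry about 7A; at 2.4V it's 35A, and a ~1V core rail is 84A, so lower rails need proportionally more power pins.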

The slide here mentions a "ring coupled inductor topology".
A switching regulator needs inductors for energy storage,
so that's probably the secret sauce.

http://www.xbitlabs.com/news/cpu/di...rete_Weapon_Integrated_Voltage_Regulator.html

And the pictures there suggest the voltage regulator is
part of an MCM (multi-chip module) design. So the CPU die
doesn't have the VR on it; it's separate dies around the
processor. There's probably also a "sea of caps" on the
bottom surface of the motherboard, in the socket area,
which would be part of the output filtering.

The cynic in me says all of this is just to hide the VID signals :-)
So no overvolting potential. When the VID pins were exposed,
they were range limited by the design of the register inside
the CPU, but that didn't prevent "addition of boost" externally.
With the regulator inside, there's nothing to fiddle with.
And if you attempted to pump more voltage into the
VCore plane, the OVP probably shuts the box off. That's
what I'd do, just to piss off the overclockers :-) You
couldn't design an internal VRM without thinking about
having an OVP.

*******

You need an elaborate VRM to sell motherboards. It's
the bling factor. Enthusiasts "know" a 12-phase motherboard
is 3x better than a 4-phase motherboard. Why, it could be
that the extra magnetics aren't even hooked up (placebo effect) :-)

Paul
 

Couldn't they just decorate the mobo more, as MSI did with its Big Bang
Xpower II, complete with bullets and Gatling gun for better cooling:

http://vr-zone.com/uploads/14361/P1020933.jpg

No deception here -- it was 100% honest to install 7 PCI-E 16x sockets
but connect only half the pins for 4 of them, making them essentially
just PCI-E 8x slots:

http://assets.vr-zone.net/14361/P1020872.jpg


I was hoping that Intel's Haswell voltage regulator would handle +12V
directly, because I recently bought a cheap MSI motherboard with a
3-phase voltage regulator, and a tabulation at Overclockers.net showed
MSI led in blown-out CPU voltage regulators, despite not leading in
sales. :(
 
Couldn't they just decorate the mobo more, as MSI did with its Big Bang
Xpower II, complete with bullets and Gatling gun for better cooling:

http://vr-zone.com/uploads/14361/P1020933.jpg

Now that, I like. Someone creative works there :-)
No deception here -- it was 100% honest to install 7 PCI-E 16x sockets
but connect only half the pins for 4 of them, making them essentially
just PCI-E 8x slots:

http://assets.vr-zone.net/14361/P1020872.jpg

Again, creative. I like how they took x16 connector
shells, and sub-filled the pins on them. It looks
like the connectors just don't have all their pins.

There was another creative connector invention: an x4
connector with a saw cut in the non-faceplate end, so you
could plug an x16 card into the x4 slot, and it just hung
over the end. They could have used that deception here,
but it stands out a lot more clearly.
I was hoping that Intel's Haswell voltage regulator would handle +12V
directly, because I recently bought a cheap MSI motherboard with a
3-phase voltage regulator, and a tabulation at Overclockers.net showed
MSI led in blown-out CPU voltage regulators, despite not leading in
sales. :(

It's strange to use 2.4V as an intermediate voltage. Maybe someday
I'll wander into the appropriate Intel doc, with a justification
for such a voltage. I think it would be a much more amusing
design, if it accepted 12V directly. You'd need hardly any
pins for the input current level if they did that. And even
fewer external components.

Maybe they aren't using MOSFETs in that design? You'd think
at that low an input voltage, you'd have trouble turning the
MOSFETs fully on and off. So maybe some other transistor
type is being used.

Paul
 
I'm dubious of those slides if they are actually implying they have
incorporated a switcher at 140MHz as a component on the MCM.

Regarding the regulators on the mobo, often using multiple phases is
cheaper. It all depends on component pricing. That is, using lower power
components and lots of them (i.e. phases) versus fewer components that
can take more power. Given how hard switchers are on filter caps, I'd
rather have more phases. Using multiple phases also reduces the physical
size of the individual components, making the board lower profile, which
helps air flow.
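The "lower power components" point can be sketched numerically: each phase carries only I_total / N, so per-phase conduction loss falls as the square. The current and resistance figures below are assumptions for illustration:

```python
# I^2 * R conduction loss in one phase when the load current splits
# evenly across N phases.
def phase_conduction_loss(total_current_a, phases, r_ohm):
    i = total_current_a / phases
    return i * i * r_ohm  # watts dissipated in one phase

total_a, r = 100.0, 0.005  # 100 A load, 5 mOhm per phase (assumed)
for n in (4, 12):
    loss = phase_conduction_loss(total_a, n, r) * n
    print(f"{n:2d} phases: {loss:6.2f} W total conduction loss")
```

With the same per-phase resistance, going from 4 to 12 phases cuts total conduction loss by 3x, and each phase's components see a fraction of the current and heat.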

I never worked at a passive component company (just semis), but often
big means expensive due to yield reduction. For chips, the bigger the
die, the greater the odds it hits a crystal defect site. (Really a
problem in bipolar due to vertical current flow, less so in CMOS.) I'm
presuming for a filter cap, as it gets bigger there is a greater chance
of a defect as well, so more smaller components can have a higher yield
than one big component.
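The yield intuition follows from the simple Poisson yield model, Y = exp(-A * D): per-component yield falls exponentially with area. The defect density below is purely illustrative:

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Probability a component of the given area is defect-free."""
    return math.exp(-area_cm2 * defects_per_cm2)

d0 = 0.5  # defects per cm^2 (assumed)
for area in (1.0, 0.25):
    print(f"{area:4.2f} cm^2 component -> yield {poisson_yield(area, d0):.1%}")
```

At these numbers one big 1 cm^2 part yields about 61%, while each quarter-size part yields about 88%, so selling four small parts wastes less material than one big one.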

All that said, I don't aggravate myself over the number of phases in a
board I am purchasing. I have to assume whatever number of phases they
picked was out of performance and economics.
 

I don't see the 30MHz to 140MHz as being an outrageous figure.
This regulator is done in silicon, and the article gives the
impression the inductors are integrated. And each cell contributes
a small portion of the needed power. So switching at a high
frequency isn't all that surprising.
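High switching frequency is what makes on-package inductors plausible at all: for an ideal buck converter, the inductance needed for a given current ripple scales as 1/f_sw. A sketch with assumed voltages and ripple target:

```python
# Ideal buck: L = Vout * (1 - D) / (f_sw * dI), where D = Vout / Vin.
def buck_inductance(vin_v, vout_v, f_sw_hz, ripple_a):
    d = vout_v / vin_v
    return vout_v * (1.0 - d) / (f_sw_hz * ripple_a)

vin, vout, ripple = 2.4, 1.0, 1.0  # 2.4 V in, ~1 V core, 1 A ripple/cell
for f in (300e3, 30e6, 140e6):
    l_nh = buck_inductance(vin, vout, f, ripple) * 1e9
    print(f"{f/1e6:7.1f} MHz -> {l_nh:8.1f} nH per cell")
```

A motherboard-class 300kHz phase needs roughly 2 microhenries, while at 140MHz a few nanohenries suffice, which is the kind of value an integrated inductor could plausibly provide.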

I think the idea is overkill for desktops, but
that's probably not why Intel is doing this.

Paul
 

There is a bit more to a switchmode power supply than just switching.
When you talk about a high-speed switcher, it is generally a few MHz,
not in the hundred-MHz range. For stability, you are going to need a
GBWP over a GHz in the error amp, and comparators that switch in a few
nanoseconds. Since Intel has SiGe technology, they can probably do
that, but nobody likes to switch fast, since fast switchers tend to
make low-efficiency converters. Plus, why would you want all that
noise generated on your module, not to mention heat?
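The GBWP point follows from a common rule of thumb: loop crossover is usually placed near f_sw / 10, and the error amplifier needs gain-bandwidth well beyond that. The margin factor below is an assumption, not a hard spec:

```python
# Rule-of-thumb control-loop requirements for a switcher.
def loop_requirements(f_sw_hz, crossover_div=10.0, gbw_factor=100.0):
    f_c = f_sw_hz / crossover_div       # target loop crossover
    return f_c, f_c * gbw_factor        # (crossover, rough error-amp GBW)

for f_sw in (1e6, 140e6):
    f_c, gbw = loop_requirements(f_sw)
    print(f"f_sw {f_sw/1e6:6.1f} MHz -> crossover {f_c/1e6:5.1f} MHz, "
          f"error-amp GBW ~{gbw/1e9:.1f} GHz")
```

A 1MHz switcher is comfortable with an ordinary op-amp; at 140MHz the crossover lands at 14MHz and the error amp needs GHz-class gain-bandwidth, which is exactly the objection above.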
 

We had a small switcher at work (size of your thumb) that ran at 10MHz.
That's why I don't think of 30MHz as being that big a deal. And ours
was done years ago - as a custom design by our power supply group.
A group my company later sold off, leaving us without a custom
design capability.

Knowing Intel, they probably targeted an ordinary CMOS process
to make this thing, for reasons of economy. Nothing exotic. Or
at least, a minimal number of additional process steps.

There are silicon processes where you can get just about any
component you want. BiCMOS processes used to be
the tech of choice - you could have caps, inductors, Schottky
diodes, just about anything you wanted, though not with large
values or anything. And that wasn't even intended as a mixed
analog/digital environment or anything. But companies eventually
threw away that capability, because it took so many more
process steps. Still, while it lasted, it was pretty amazing stuff.
The cost of making the chips forces us into the bland CMOS world.
It's my guess that when Intel made those internal regulators,
it wanted something it could make on a regular CMOS line,
with only enough extra process steps to make the inductors, say.

I think that's the reason it runs at 2.4V input - because it's
actually a not-so-old CMOS process, and if you go higher in voltage
than that, you'd have a problem with breakdown voltage. Otherwise,
if they had a choice, I don't think they picked 2.4V for fun.
If the technology was really good, they would be running it
at 12V, and avoiding the VRM on the motherboard to convert
12V to 2.4V.
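Breakdown voltage aside, there is also a duty-cycle way to see why 12V direct would be awkward at these switching speeds (a sketch; the 140MHz figure comes from the articles above, the rest is assumed):

```python
# Ideal buck duty cycle D = Vout / Vin, and the resulting on-time
# per switching cycle at a given frequency.
def buck_on_time_ns(vin_v, vout_v, f_sw_hz):
    d = vout_v / vin_v
    return d, d / f_sw_hz * 1e9  # (duty cycle, on-time in ns)

for vin in (12.0, 2.4):
    d, t_on = buck_on_time_ns(vin, 1.0, 140e6)
    print(f"Vin {vin:4.1f} V -> D = {d:.3f}, on-time = {t_on:.2f} ns")
```

From 12V, a ~1V core would need roughly an 8% duty cycle, an on-time well under a nanosecond per cycle; from 2.4V the duty cycle sits near 42% and the timing is far more comfortable.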

Paul
 

BiCMOS is no big deal. I don't know why you say it isn't being used. I
mentioned SiGe because intel has that technology. I didn't mention that
I have designed power supply chips in NMOS, CMOS, and BiCMOS because I
wasn't trying to pull rank. I'm just saying nobody is even at 10MHz,
because the market isn't there. There are core losses to deal with,
and with integrated inductors, there just isn't going to be much
energy storage there.

Basically I would like to see this in an Intel paper rather than some
press release by somebody who probably doesn't know what they are
writing about.

I will ask around and see if anyone knows about this Haswell integrated
SMPS.
 

OK, I think you'll get a kick out of this. This is the first
good reference I could find.

http://www.psma.com/sites/default/f...ully-integrated-silicon-voltage-regulator.pdf

Paul
 
I came across that last night and emailed it to a few people in the biz.
Nobody knows if the product is real or not. All they say is Intel
wouldn't lie. Note the efficiency is speculated. Either you have working
silicon or you don't. But the speculated 82% efficiency isn't all that
great. I won't mention the manufacturer, but I once found an efficiency
curve in a chip that was, shall we say, speculation. [Old school
engineers trust one thing: a scope photograph from an analog scope. Once
we went to digital instruments, the trust factor went out the window.
Data from spreadsheets... don't get me started.]
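To put that 82% in perspective, the watts dissipated inside the package are P_loss = P_out * (1/eta - 1). The output power figure is illustrative:

```python
# Heat the on-package regulator itself must shed at a given efficiency.
def regulator_loss_w(p_out_w, efficiency):
    return p_out_w * (1.0 / efficiency - 1.0)

for eta in (0.82, 0.90):
    print(f"eta = {eta:.0%}: {regulator_loss_w(60.0, eta):5.1f} W lost "
          f"while delivering 60 W to the cores")
```

At 82%, delivering 60W to the cores burns about 13W in the regulator, heat that now sits inside the CPU package rather than spread across the motherboard.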

About the only thing useful in putting this regulator on the MCM is the
ability to ramp the current quickly. But that needs to be studied
against the alternative of putting a lot of capacitance on the MCM, i.e.
local storage to get around the lead inductance, then using a garden
variety regulator on the outside.
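That alternative can be sized with C = I * dt / dV: the hold-up capacitance needed to ride through a load step until the external regulator catches up. All three input values below are assumptions for illustration:

```python
# On-package hold-up capacitance to cover a load step for a given
# regulator response time and allowed voltage droop.
def holdup_capacitance_f(step_a, response_s, droop_v):
    return step_a * response_s / droop_v

c = holdup_capacitance_f(50.0, 1e-6, 0.05)  # 50 A step, 1 us, 50 mV droop
print(f"~{c*1e6:.0f} uF of on-package capacitance required")
```

At these numbers you'd need on the order of 1000 uF on the MCM, which suggests why the fast local regulator might win that trade.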

When you design a switcher chip, the cost of the external components is
your problem, because ultimately it is the customer's problem. So you
aim the initial specification at the cheapest solution that does the
job. That is why you don't find much in the way of controllers beyond a
few MHz.

Intel isn't in the position of having to produce the cheapest solution,
since they own the market these days thanks to AMD slipping. So I
suppose they could produce a more expensive solution, since where else
are you going to go for a competitive product?

One presumes you can't go to TSMC and get NiFe inductors on your wafer,
so I don't see this becoming the dominant technology, even if it works.
 

This stuff is already in production.

The slide deck presumably pre-dates the production release.

Since the slide deck was produced by someone on the research side,
it's hard to say how Intel views this from the business end. There
is an area saving by doing it that way, which means smaller devices
might be designed (like the Intel NUC).

Paul
 