Intel concedes server business to AMD ;-)

  • Thread starter: Robert Myers
Well, I just posted a story about how the Netburst architecture is
likely to disappear by 2006. Without Netburst's long pipelines,
Hyperthreading becomes difficult on the short-pipeline architecture
that's going to replace it. But Intel is going to try it anyway.
I'm feeling a little slow today. There isn't nearly the payoff for
hyperthreading when things like branch mispredicts are less expensive
(in clock ticks) because of a shorter pipeline, but that doesn't mean
hyperthreading is harder to implement.
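To put some (made-up) numbers on that: here's a toy model of how many
issue slots mispredict flushes waste as a function of pipeline depth.
That wasted fraction is roughly the pool of slots a second SMT thread
can reclaim. The branch frequency, mispredict rate, and depths are my
assumptions for illustration, not Intel's figures.

/* Toy model (my own back-of-envelope, not Intel data): fraction of
 * issue slots lost to branch mispredicts vs. pipeline depth, which
 * bounds what a second SMT thread could reclaim. */
#include <stdio.h>

int main(void)
{
    double branch_freq = 0.20;  /* assumed: 1 branch per 5 instructions */
    double mispredict  = 0.05;  /* assumed: 5% mispredict rate */
    int depths[] = { 31, 14 };  /* Prescott-ish vs. a short-pipe design */

    for (int i = 0; i < 2; i++) {
        int d = depths[i];
        /* flush cycles paid per instruction, on average */
        double waste = branch_freq * mispredict * d;
        double util  = 1.0 / (1.0 + waste);  /* single-thread utilization */
        printf("depth %2d: pipe utilization ~%.0f%%, ~%.0f%% reclaimable by SMT\n",
               d, 100.0 * util, 100.0 * (1.0 - util));
    }
    return 0;
}

Halve the depth and you roughly halve what SMT has left to win back,
which is the whole argument in two printf lines.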

Looking at Intel's marketing materials and comparing the disappointing
payoff (hyperthreading doesn't seem to improve units of useful work
per watt or per transistor), I conclude that hyperthreading is a
marketing gimmick and that Intel knows that, so it doesn't much matter
what the real payoff is, as long as it isn't too negative.

RM
 
Sure, 64-bit OSes have been available for a bit. But how many 64-bit
applications--designed for 64-bit--are you running?

Designed? Why? All the apps here are 64 bit.
Yeah, then there's the compiler problem...

Why is that a problem? x86 is rather well known. Doubling the register
width (and more importantly depth) isn't rocket-surgery.

Are you piling a little more FUD here?
 
Why do you think that 4-5 years is an appropriate time frame to develop
compilers? I mean MSVC was crap for a while from what I hear, and
didn't get good until recently. How long has that been around?

Perhaps because there is money to be made? Leaving development
investment on the table to rot for five years isn't a good plan. The fact
is, that's exactly what Intel/HP have done.

Intel/HP have bungled the whole deal, what makes you think they'd release
the information needed to do it "right"?
 
Almost anybody would have to vote for inept at this point, wouldn't
they? Were I a professor at the Harvard Business School, I'd probably
be looking around for some smart young whip to write a thesis on how
this always happens to technology companies, with Intel (and
Microsoft, btw) as only the latest case studies.

I'd read it, but I doubt you'd find enough people willing to talk to
fill a comic book. I do believe Intel will try to bury the details
(when was the first time you heard about FS?)
A software type (take it easy, Keith, he worked for a do-it-all chip
company that I'll bet you've worked with, but no names) opined to me
recently that AMD's advantage is that they really have no choice but to
get outside help, so they get it. My interpolation is that, given the
state of the industry, the best is generally available for hire by AMD
unless they happen to work for Intel.

With? Me? I've never worked *with* any "do-it-all chip company". But I
do have to agree WRT AMD. Pretty smart cookies, eh?
I'd love to be able to go back and look at the historical documents at
Intel. What did they know and when did they know it? Even knowing what
they _thought_ they knew would make a fascinating read.

Do you really think these things exist? Come on, no one allows records to
live past their useful life. Stockholders could find 'em in a discovery
action. I'd like to know what and when HP figured it out. ;-)
It isn't so much that they didn't get it right in four or five
years...the problem is that hard. It's hard to comprehend what they
were doing in terms of risk management, though. Not much, apparently.

Of course it's hard. The folks on CA and AFC were laughing at the
notion that it was even possible, given the state of the art.
These things had been tried before, and many in those groups
were there when it was tried. Evidently Intel bet against the known art,
and lost.

There was recently a Gelato meeting dealing with getting some serious
help for gcc with Itanium. I haven't followed the details since the
meeting to see whether anything really happened.

I'll venture that no compiler solution to Itanium's problems is
forthcoming. Not that significant progress isn't possible. It just
won't be enough.



Gee, I heard that on CA at *least* five years ago. Some things never
change.
That means death for Itanium? I'm less sure of that. It depends on how
adventurous Intel is willing to be. The fact that execution pipes stall
frequently really shouldn't be a serious problem for server
applications. The idea that you have to keep a single pipe stuffed full
and running all the time is mainframe and HPC thinking.

What is the incentive for anyone to move their application to Itanic?
It's a business issue. What is the payoff?
The only thing Intel really has to preserve is the ISA. If you can add
more execution paths without blowing the power consumption through the
roof, you should be able to make practically any architecture get any
kind of throughput you want as a multi-threaded server.
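Back-of-envelope (my numbers, nobody's design docs) for the "who cares
if they stall" arithmetic: if a thread does C cycles of work between
M-cycle memory stalls, about 1 + M/C threads keep one pipe busy.

/* Sketch of throughput from thread count when each thread stalls
 * often: assumed 20 cycles of work between 200-cycle memory stalls. */
#include <stdio.h>

int main(void)
{
    double compute = 20.0;   /* assumed cycles of work between misses */
    double miss    = 200.0;  /* assumed cycles per memory stall */

    printf("need ~%.0f threads to hide the stalls\n", 1.0 + miss / compute);

    for (int n = 1; n <= 13; n += 4) {
        double util = n * compute / (compute + miss);
        if (util > 1.0) util = 1.0;   /* pipe can't be more than full */
        printf("%2d threads -> pipe utilization ~%.0f%%\n", n, 100.0 * util);
    }
    return 0;
}

That's the barrel-processor bet: tolerate the latency instead of
fighting it.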

The only thing Intel *has* is the ISA. Why is that an issue for Intel's
customers? What does that benefit *them*?
...That's the path (lots of threads, who cares if they stall) I thought
Intel was going to take with the core that I gather was being designed
at Marlborough. That core, apparently, is dead, so who knows what Intel
is going to do.

I've been wondering about that for three years.
I should be posting to comp.arch so I could get major flames.

Whatever, but they're more into software ickyness. ;-)
 
How is that weak? It's a well known fact that not having a good tool
chain can be a serious issue. I mean, let's be honest, when MIPS,
POWERPC, SPARC etc. debuted, it was pretty darn obvious that their
compilers were far less mature than those for S/360; don't you think
that made adoption of those architectures a little hard to swallow for
those used to S/360's development tools?
<snip>

I spent 30 years writing software for a major oil company, working on
IBM MVS/TSO and then Cray supercomputers, later Decstations (with MIPS
processors) and Sun Sparcs and IBM AIX and HPUX and finally on IBM
compatible PCs. Of all the platforms that I worked on, I would
consider IBM MVS/TSO to have the poorest development environment. The
compilers were fair to middling (if you like Fortran or Cobol), but
the "environment" sucked. And the development environment is at least
as important as the compilers themselves.
 
Too bad 64-bit SW is not further along. From what I've read the Itanium
really flies on that stuff. Just sucks really bad for 32-bit. That, and
it's a bit pricey.

Oh no... You just said the 'I' word... that's opening up a can of
worms!

The success and failure of Itanium has been discussed at length, but
suffice it to say that there is no one reason why it's utterly failed
to live up to expectations for commercial success.

The 64-bit vs. 32-bit thing isn't really accurate, it's more a
question of Itanium code (which is, by definition, 64-bit) and x86
code (which can be 16-bit, 32-bit or 64-bit). x86 code, regardless of
its bitness, runs like crap on the Itanium. Even if this code were
64-bit (I don't think the Itanium currently supports 64-bit x86 code,
but there's no reason why the emulation couldn't add this feature) it
would still run like crap. Actually, to be fair, as far as emulation
goes, it runs exceptionally well, but that isn't saying much since all
emulation runs like crap.

The real problem Itanium has, though, is that even running its
native code it still isn't very fast. As often as not the fastest
Opteron and Xeon processors end up being faster than the Itanium when
each is running native code. As others have argued, part of this is
related to the compiler, but personally I think Itanium compiler
development plateaued 3 years ago; from here on out the best Intel
can hope for is a few percent here or there on that front. Either
way, only running as fast as Opteron/Xeon chips would have been
enough to kill Itanium on its own, but combine that with the fairly
large additional cost involved and the fact that IBM's offerings,
the Power4 and now Power5 chips, thoroughly smack the Itanium into
the ground, and you end up with a processor no one wants.

When you get right down to it, the Itanium may well have been a major
mistake. It's one of those architectures that looks great on paper
but just doesn't quite cut it when it comes to implementation in the
real world.
 
I'd say it's a fairly shrewd response, and it's more than what you
quoted:

1. AMD has made a short-sighted decision by sticking with DDR when
DDR2 is the longer term solution. Your response: buyers don't or
shouldn't care, because all the hassle will be someone else's problem.

Not at all, more that you WILL have to change the platform in a year
anyway, regardless of whether it's Intel or AMD, DDR, DDR2 or
whatever. Platforms do change, and they change regularly.

I like how Intel conveniently left out that their customers will need
to change their platform for dual-core chips while AMD customers will
not.
My only quibble with Intel's response would be that they didn't
_quite_ close the loop by saying: you remember how hard it's been
sometimes to get platforms for AMD processors you could really count
on? Each time you change memory technology, you have to start over.
FUD? Of course. This is marketing.

I still say that it's extremely weak FUD given that DDR2 currently
does nothing other than add ~$100 to the price tag of the system.
Even DDR2-667 hasn't really shown to improve performance over DDR400
by any noticeable margin. Hmm, what a surprise, increasing bandwidth
doesn't help any when you also increase latency.
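The arithmetic is brutal. A rough sketch, with assumed (not measured)
loaded latencies, of what one 64-byte line costs to fetch from each:

/* Illustration only: numbers below are assumptions, not benchmarks.
 * Time per 64-byte line = latency + 64 bytes / bandwidth. */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double lat_ns; double gb_s; } mem[] = {
        { "DDR400  ", 50.0, 3.2 },  /* assumed ~50 ns loaded latency */
        { "DDR2-667", 60.0, 5.3 },  /* more bandwidth, more latency  */
    };

    for (int i = 0; i < 2; i++) {
        double xfer = 64.0 / mem[i].gb_s;  /* ns to transfer the line */
        printf("%s: %4.1f ns latency + %4.1f ns transfer = %5.1f ns/line\n",
               mem[i].name, mem[i].lat_ns, xfer, mem[i].lat_ns + xfer);
    }
    return 0;
}

Call it ~70 ns either way: the extra bandwidth shaves the tail while
the extra latency grows the head.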

As for AMD platforms, well now that's another question, but this FUD
is nothing new. AMD (and OEMs using AMD processors) have been dealing
with this "problem" (sometimes real, though usually just perceived)
ever since the first Athlon came out back in '99.
2. AMD processors don't have hyperthreading. Hyperthreading is
already in the benchmarks? Well, yes, but one of these days, they're
going to figure out how really to take advantage of HT. Honest.
Meanwhile, the manager stuck with Intel processors that are
outperformed in benchmarks can say, "Yeah, but because of
hyperthreading, this is really like an 8-way box."

I didn't even comment on this because it was just stupid beyond
belief. First off, Intel chips don't have hyperthreading either, or
at least not their dual-core chips other than the Extremely Expensive
Edition.

Ohh, and the whole thing about how Hyperthreading will show MORE of an
advantage when you get multiple cores vs. single core chips. Whoever
said that was just smoking the crack-pipe I think.
3. AMD doesn't have the capacity to be Intel. That's the one that
really counts for managers with their thinking caps on.

Now this is the kicker, though again it's the same argument that AMD
has been dealing with since ~'97 when Fab25 came on-line for their K6
chips. Surprisingly, for the past 5 years it has been INTEL, and not
AMD, that has had product shortage problems. After yelling and
screaming for 7 or 8 years about AMD not being able to deliver only to
see AMD do a better job of delivering products than Intel, everyone
except Dell has stopped listening.
 
Tony Hill said:
I still say that it's extremely weak FUD given that DDR2 currently
does nothing other than add ~$100 to the price tag of the system.
Even DDR2-667 hasn't really shown to improve performance over DDR400
by any noticeable margin. Hmm, what a surprise, increasing bandwidth
doesn't help any when you also increase latency.

Didn't another company discover that a while back? Brambuss,
something like that? ;-)
 
On Thu, 12 May 2005 12:46:19 -0400, Robert Myers wrote:


Do you really think these things exist?

Tech types are packrats. No staff type ever wrote a briefing saying
"This is what we expected, this is what we got, and here's why we
didn't get what we expected?"
Come on, no one allows records to
live past their useful life. Stockholders could find 'em in a discovery
action. I'd like to know what and when HP figured it out. ;-)


Of course it's hard. The folks on CA and AFC were laughing at the
notion that it was even possible, given the state of the art.
These things had been tried before, and many in those groups
were there when it was tried. Evidently Intel bet against the known art,
and lost.
You've never been around when something that "everybody knew" turned
out to be wrong? "Everybody knew" you couldn't make features smaller
than the wavelength, for example.

IBM went through a similar phase: The only reason it hasn't been done
is because nobody else has done it right. We've got the money to do
it right.

The key problem that underlies at least some of Itanium's difficulties
is a core problem for IT and it isn't going to go away: how to
anticipate movement of data to minimize time on the critical path.
The same problem shows up with memory access, with disk access, and
now with serving web pages. The technology continues to move since
Itanium was dreamed up. And who knows if they even understood how
much of a problem getting the data in would be--I don't think
they did. I'd be grateful, in fact, for a citation that said that
they did. Itanium was already on the launching pad before people
started saying "Oh, my g*d" about the memory wall. Since Intel
effectively made the same decision *again* with NetBurst, though, one
is inclined to think that Intel continued to think it had a way to
beat the problem. Wonder who was lying to whom?

Gee, I heard that on CA at *least* five years ago. Some things never
change.
It's a moving target.
What is the incentive for anyone to move their application to Itanic?
It's a business issue. What is the payoff?


The only thing Intel *has* is the ISA. Why is that an issue for Intel's
customers? What does that benefit *them*?

The advantage is that it's not x86 and it's not made by IBM. Your bet
is that the industry will converge on x86. It may well, except for a
particularly lucrative part of the market. That's the part that IBM
has and that Intel wants. A different ISA won't help them to get it?
Apparently Intel thinks otherwise.

Why would *customers* care? Because they don't want their enterprise
workhorses running on a legacy PC processor that migrated upward. It
seems increasingly unlikely, but if Itanium ever does make it to the
desktop, it will be clear that it is migrating downward. You can
think that's irrelevant, if you like, but I don't. If you're going to
wheel out IBM big iron, you don't want to be wheeling in a PC to
replace it.

It's true. When I talk to people who _ought_ to be interested,
they're not. They've already tried it, found out that it's hard and
there's no real payoff, and moved on.

RM
 
Robert Myers said:
Tech types are packrats. No staff type ever wrote a briefing saying
"This is what we expected, this is what we got, and here's why we
didn't get what we expected?"

You don't know much about engineers, do you? If the document or presentation
existed, there is a copy on somebody's C: or ~/ somewhere.
You've never been around when something that "everybody knew" turned
out to be wrong? "Everybody knew" you couldn't make features smaller
than the wavelength, for example.

IBM went through a similar phase: The only reason it hasn't been done
is because nobody else has done it right. We've got the money to do
it right.

And in the case of VLIW, I think IBM caught on pretty early in that process.
But maybe they were more humble than Intel by then. By the time Itanium hit
public notice, I wasn't hearing much about VLIW.
The key problem that underlies at least some of Itanium's difficulties
is a core problem for IT and it isn't going to go away: how to
anticipate movement of data to minimize time on the critical path.
The same problem shows up with memory access, with disk access, and
now with serving web pages. The technology continues to move since
Itanium was dreamed up. And who knows if they even understood how
much of a problem getting the data in would be--I don't think
they did. I'd be grateful, in fact, for a citation that said that
they did. Itanium was already on the launching pad before people
started saying "Oh, my g*d" about the memory wall. Since Intel
effectively made the same decision *again* with NetBurst, though, one
is inclined to think that Intel continued to think it had a way to
beat the problem. Wonder who was lying to whom?

Oh please. People have been talking about the memory wall and evaluating
its effects for a long time. Do you think computer architects just fell off
the turnip truck? Hell, we even heard about it up here on the frozen
tundra. The real error is the common one of believing someone who says
"This time it is different". Often, when someone says that it turns out not
to be true, whether talking about the stock market or computer architecture.

It's a moving target.

Sometimes projects with "invention required" don't work out.
The advantage is that it's not x86 and it's not made by IBM. Your bet
is that the industry will converge on x86. It may well, except for a
particularly lucrative part of the market. That's the part that IBM
has and that Intel wants. A different ISA won't help them to get it?
Apparently Intel thinks otherwise.

Why would *customers* care? Because they don't want their enterprise
workhorses running on a legacy PC processor that migrated upward. It
seems increasingly unlikely, but if Itanium ever does make it to the
desktop, it will be clear that it is migrating downward. You can
think that's irrelevant, if you like, but I don't. If you're going to
wheel out IBM big iron, you don't want to be wheeling in a PC to
replace it.

It's true. When I talk to people who _ought_ to be interested,
they're not. They've already tried it, found out that it's hard and
there's no real payoff, and moved on.

RM

It would be interesting to know what Intel's future Itanium plans are. HP
has a problem, having apparently tied their servers pretty solidly to
Itanium, if Intel de-prioritizes it. And didn't HP sell most of their
microprocessor folks to Intel?

del
 
Del said:
It would be interesting to know what Intel's future Itanium plans are. HP
has a problem, having apparently tied their servers pretty solidly to
Itanium, if Intel de-prioritizes it. And didn't HP sell most of their
microprocessor folks to Intel?

Small wonder that companies like Intel and Microsoft continue to
dominate industries, when their competitors are so incredibly feeble
that they fold their hands at the mere announcement of a new
"world-beating" product.
 
You don't know much about engineers, do you? If the document or presentation
existed, there is a copy on somebody's C: or ~/ somewhere.

I don't know. Does Keith, who wrote that, know much about engineers?
I agree with you. Somebody's got the goods.
And in the case of VLIW, I think IBM caught on pretty early in that process.
But maybe they were more humble than Intel by then. By the time Itanium hit
public notice, I wasn't hearing much about VLIW.

Translation: IBM was in a fight for its life. Something about the
sight of the gallows and concentrating the mind.
Oh please. People have been talking about the memory wall and evaluating
its effects for a long time. Do you think computer architects just fell off
the turnip truck? Hell, we even heard about it up here on the frozen
tundra. The real error is the common one of believing someone who says
"This time it is different". Often, when someone says that it turns out not
to be true, whether talking about the stock market or computer architecture.

I date the term "memory wall" from a 1995 article in Computer
Architecture News by Wulf and McKee.

http://citeseer.ist.psu.edu/cache/p...ztechreportszSzCS-94-48.pdf/wulf95hitting.pdf

Engineers and computer architects probably talked about the problem
long before the article appeared, but the Wulf and McKee article
itself is a bit naive, as if cache were all about reuse and there
would be nothing to be done about "first use" cache misses. Had that
turned out to be true, of course, faster processors would long ago
have become completely pointless, and everything would be massively
parallel processing by now.
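For what it's worth, "first use" misses turned out to be attackable
after all, by fetching ahead of the first touch. A minimal sketch of
the software flavor, using GCC's __builtin_prefetch; the prefetch
distance here is an assumed tuning knob, not a magic number:

/* Prefetch data a few iterations before its first use so the miss
 * overlaps with useful work instead of stalling the pipe. */
#include <stddef.h>

double sum_with_prefetch(const double *a, size_t n)
{
    enum { AHEAD = 16 };  /* assumed distance; tune to latency/loop cost */
    double s = 0.0;

    for (size_t i = 0; i < n; i++) {
        if (i + AHEAD < n)
            __builtin_prefetch(&a[i + AHEAD], 0, 0);  /* read, low reuse */
        s += a[i];
    }
    return s;
}

Hardware prefetchers do the same trick without the hint, which is a
big part of why the "faster processors become pointless" scenario
never played out.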

I don't think anyone could foresee at that time what a big deal
out-of-order processing would turn out to be. In some ways, Intel/HP
just jumped the gun. Had they (for example) waited until the Pentium
II had been introduced, the future would have been much easier to
foresee, and it would be harder to forgive them for adopting an ISA
that would make competing even with x86 much more difficult.
Sometimes projects with "invention required" don't work out.

The compilers are getting better, just not fast enough compared to the
competition.

It would be interesting to know what Intel's future Itanium plans are. HP
has a problem, having apparently tied their servers pretty solidly to
Itanium, if Intel de-prioritizes it. And didn't HP sell most of their
microprocessor folks to Intel?
That begs the question of what HP's bigger picture view of the future
is. There must be more on the table than just what processor their
big boxes will use.

As to what Intel thinks, it's hard to imagine Intel disburdening
itself of Itanium while HP is still relying on it. Cutting HP (or
equivalent) loose would close off many futures to Intel.

RM
 
I date the term "memory wall" from a 1995 article in Computer
Architecture News by Wulf and McKee.

http://citeseer.ist.psu.edu/cache/p...ztechreportszSzCS-94-48.pdf/wulf95hitting.pdf

Engineers and computer architects probably talked about the problem
long before the article appeared, but the Wulf and McKee article
itself is a bit naive, as if cache were all about reuse and there
would be nothing to be done about "first use" cache misses. Had that
turned out to be true, of course, faster processors would long ago
have become completely pointless, and everything would be massively
parallel processing by now.
That's the trouble with the future, it's so hard to predict. (Someone
famous said that.) Rather than "memory wall", if you search for "von Neumann
bottleneck" it might go back a ways further. Certainly the guys that
invented caches had a clue. In IBM's case that was long about 360/85 time,
or the late 60's. And the folks studying and simulating performance
certainly understood the issues. Of course folks got way more out of the
old way than anyone anticipated, but all things must come to an end.

del
 
Robert said:
I'm feeling a little slow today. There isn't nearly the payoff for
hyperthreading when things like branch mispredicts are less expensive
(in clock ticks) because of a shorter pipeline, but that doesn't mean
hyperthreading is harder to implement.

Looking at Intel's marketing materials and comparing the disappointing
payoff (hyperthreading doesn't seem to improve units of useful work
per watt or per transistor), I conclude that hyperthreading is a
marketing gimmick and that Intel knows that, so it doesn't much matter
what the real payoff is, as long as it isn't too negative.

I personally assume that they're going to be happy that they have
multicores and forget Hyperthreading.

Plus today it looks like a legitimate security hole in Hyperthreading
has been found, and possibly Intel might not be too happy pushing
Hyperthreading as a marketing gimmick for much longer.

http://www.daemonology.net/papers/htt.pdf
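The primitive the paper builds on is simple: the two logical
processors on an HT chip share the caches, so one thread can time its
own loads to learn which lines its sibling touched or evicted. A
stripped-down hit-vs-miss timer (x86/GCC; the real attack is far more
careful than this):

/* Time one load with the cycle counter: a cached line reads in tens
 * of cycles, a flushed one in hundreds. That gap is the side channel. */
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc, _mm_clflush, _mm_lfence */

static volatile char probe[64];

static unsigned long long time_load(const volatile char *p)
{
    _mm_lfence();                     /* keep the timed load in order */
    unsigned long long t0 = __rdtsc();
    (void)*p;                         /* the load being timed */
    _mm_lfence();
    return __rdtsc() - t0;
}

int main(void)
{
    (void)probe[0];                        /* warm the line */
    printf("cached:  %llu cycles\n", time_load(probe));
    _mm_clflush((const void *)probe);      /* evict it */
    printf("flushed: %llu cycles\n", time_load(probe));
    return 0;
}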

Yousuf Khan
 
Robert said:
The advantage is that it's not x86 and it's not made by IBM. Your bet
is that the industry will converge on x86. It may well, except for a
particularly lucrative part of the market. That's the part that IBM
has and that Intel wants. A different ISA won't help them to get it?
Apparently Intel thinks otherwise.

Why would *customers* care? Because they don't want their enterprise
workhorses running on a legacy PC processor that migrated upward. It
seems increasingly unlikely, but if Itanium ever does make it to the
desktop, it will be clear that it is migrating downward. You can
think that's irrelevant, if you like, but I don't. If you're going to
wheel out IBM big iron, you don't want to be wheeling in a PC to
replace it.

It's true. When I talk to people who _ought_ to be interested,
they're not. They've already tried it, found out that it's hard and
there's no real payoff, and moved on.

These days, the enterprise is evolving towards upgraded PCs. It's a sad
sight, yes, but it's happening nonetheless. But it's not so bad, the
PCs are adopting features from mainframe and Unix servers as they move
up.

Yousuf Khan
 
Bandwidth only matters if you don't have enough; once you do, more
helps not a bit. Latency is forever.
Didn't another company discover that a while back? Brambuss, something
like that? ;-)

DamBu$$?
 
You don't know much about engineers, do you? If the document or presentation
existed, there is a copy on somebody's C: or ~/ somewhere.

Well, since I said the above; yes I do know a little about engineers. ;-)
Do you save all your email in separate folders so the disk munchers can't
find it when it expires? I don't bother. The PHBs don't want any
evidence. ;-)
 
Tech types are packrats. No staff type ever wrote a briefing saying
"This is what we expected, this is what we got, and here's why we
didn't get what we expected?"

Tech types are often overruled by PHB types. I'd expect these sorts of
discussions between high-level architects, and the evidence would be
"suppressed".
You've never been around when something that "everybody knew" turned
out to be wrong? "Everybody knew" you couldn't make features smaller
than the wavelength, for example.

I rather "know" that C is about as fast as it gets. I'm not investing in
a company that thinks it knows better.
IBM went through a similar phase: The only reason it hasn't been done is
because nobody else has done it right. We've got the money to do it
right.

After a few billion$ were spent by others, why don't we flush a few more
of our stockholders'. Money's cheap.
The key problem that underlies at least some of Itanium's difficulties
is a core problem for IT and it isn't going to go away: how to
anticipate movement of data to minimize time on the critical path. The
same problem shows up with memory access, with disk access, and now with
serving web pages. The technology continues to move since Itanium was
dreamed up. And who knows if they even understood how much of a problem
getting the data in would be--I don't think they did.

I guess Intel architects don't read CA? These issues were discussed there
when Itanic was first announced.
I'd be
grateful, in fact, for a citation that said that they did. Itanium was
already on the launching pad before people started saying "Oh, my g*d"
about the memory wall. Since Intel effectively made the same decision
*again* with NetBurst, though, one is inclined to think that Intel
continued to think it had a way to beat the problem. Wonder who was
lying to whom?

Itanic may have been on the launching pad before people discussed this,
but these people discussed these problems as soon as they saw the monster.
Apparently you don't think Intel's architects are as sharp as those who
publicly post to CA.
It's a moving target.

It's dead, Jim.
The advantage is that it's not x86 and it's not made by IBM. Your bet
is that the industry will converge on x86. It may well, except for a
particularly lucrative part of the market. That's the part that IBM has
and that Intel wants. A different ISA won't help them to get it?
Apparently Intel thinks otherwise.

Why does "made by IBM" make it bad? A proprietary architecture is a
proprietary architecture (Itanic much more so than PowerPC, in fact).
Why would one take applications from a more or less open architecture
(x86) to a closed one? Why would you trust your enterprise to Intel as
opposed to IBM? ...particularly when one has a tad more experience in the
market.

No, my bet is *not* that the industry will converge on x86. My bet is
that no x86 applications will move to other platforms in the foreseeable
future. My bet is that x86 will live on far longer than Itanic. My bet
was that Itanic wasn't going to displace x86. My bet was that Intel
stubbed their toe with Itanic, and AMD gave them a gut-kick while they
weren't paying attention. So far I've been right.

If my bet were that x86 was everything there is, I'd be rather stupid
doing what I do. Well...
Why would *customers* care? Because they don't want their enterprise
workhorses running on a legacy PC processor that migrated upward. It
seems increasingly unlikely, but if Itanium ever does make it to the
desktop, it will be clear that it is migrating downward. You can think
that's irrelevant, if you like, but I don't. If you're going to wheel
out IBM big iron, you don't want to be wheeling in a PC to replace it.

That was certainly Intel's plan. Too bad it was fatally flawed by a
turkey of an architecture. Intel needed a bust-out architecture to pull
off anything like that, but chose one that needed more invention than
they could handle. Meanwhile everyone else scaled up their
performance. ...sorta what Intel (with x86) did to RISC.
It's true. When I talk to people who _ought_ to be interested, they're
not. They've already tried it, found out that it's hard and there's no
real payoff, and moved on.

Meanwhile, AMD swiped Intel's lunch money. Intel shows pretty
poor signs of catching up.
 
keith said:
Well, since I said the above; yes I do know a little about engineers. ;-)
Do you save all your email in separate folders so the disk munchers can't
find it when it expires? I don't bother. The PHBs don't want any
evidence. ;-)

Nope, but I detach interesting .ppt and .pdf files and stash them on c: or
v: or h:
don't you?
 
Nope, but I detach interesting .ppt and .pdf files and stash them on c: or
v: or h:
don't you?

Rarely. I rarely get interesting files in emails (they're on web sites
also under PHB control). I was thinking more along the lines of strategy
and direction discussions, rather than the mundane bit-banging details.
 