65nm news from Intel

  • Thread starter: Yousuf Khan
Paul said:
There's been a great deal of progress in magnetic fusion in the last
few decades. Confinement parameters for current machines are orders
of magnitude ahead of where they were in the 1970s; understanding of
plasmas has also greatly advanced.

Whether tokamaks are going to be economically competitive is another
matter. Fortunately, there are exciting ideas for more compact
reactors.

I chose hot fusion as an example of a problem you'd _really_ like to be
able to solve, that you think that you ought to be able to solve, that
significant effort has gone into solving, but that you just haven't been
able to solve so far...to the point where it has become questionable
whether it is reasonable to expect a satisfactory solution within a
foreseeable future.

RM
 
But the main point of desktop parallelism isn't about what cannot be
For some fairly trivial meaning of the word "important". While the
game market was minuscule until a couple of decades back, it has now
reached saturation. Yes, it dominates the benchmarketing, but it
doesn't dominate CPU design, and there are very good economic reasons
for that. Sorry, that one won't fly.

My view is that benchmarketing WILL dominate consumer CPUs: twice the
cores, twice the performance that Joe Consumer will see. What I was
saying is that this is what matters from the HOME USER perspective,
which is a substantial part of the x86 market, don't you think? Normal
businesses have already stopped looking for top performance in their
desktop PCs, and go for low-cost options. The workstation market is
different, but many workstation apps do parallelize, at least to a
couple of threads. My view is that what sells is what they will
deliver. It's important because those numbers will determine whose
computer is faster in the home-user market, and business desktops long
ago gave up on the idea of whose is faster. They just look at the OEM
name, the price, and some other variables like whether it's
Intel Inside, and go for the Celeron.
Besides, people who write software will typically have TWO years per
doubling of the number of cores ;) [Except there is probably one
shrink that goes toward increasing cache instead of the number of
cores, and that one will more probably come earlier than later.]

Hmm. I have been told that more times than I care to think, over a
period of 30+ years. It's been said here, in the context of 'cheap'
computers at least a dozen times over the past decade. That one
won't even start moving.

Paying an exponential amount of die area for a logarithmic performance
increase, versus increasing the number of cores, is a trade that
should stop making sense once on-die caches are big enough. The reason
it should happen now is NOT that software people want it to happen;
it's just that doubling the number of cores versus gaining 20%
single-thread performance is a really important comparison. First, do
you really think an x86 core could go much wider and gain big
performance from that? Do you think lengthening the pipeline for
better clock speed would be possible (beyond the P4)? No: there is not
enough ILP in x86 code, and the cost of the circuitry for extracting
ILP rises so much faster than the ILP gained that it's a dead end too.
So they have to turn to what they can increase: caches, on-die memory
controllers, and multiple cores. But after dual core, what are they
going to do?
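The core-size trade being described can be sketched with a back-of-the-envelope calculation, assuming the rough empirical rule (often called Pollack's rule; my assumption, not something stated in the post) that single-thread performance grows as about the square root of core area:

```python
import math

def single_thread_perf(core_area):
    # Assumed scaling rule: perf ~ sqrt(area), so ILP-extraction
    # circuitry pays ever more area for ever less speedup.
    return math.sqrt(core_area)

baseline = single_thread_perf(1.0)
big_core = single_thread_perf(2.0)       # spend 2x the area on one core
dual_core = 2 * single_thread_perf(1.0)  # or on two baseline cores

print(round(big_core / baseline, 2))   # 1.41: ~41% more single-thread perf
print(round(dual_core / baseline, 2))  # 2.0: doubled throughput, if the work parallelizes
```

Under that assumption, the second core wins easily for any workload that splits into two threads, which is the comparison the post is making.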

It's an ILP vs. clock speed vs. core size kind of question.
Interconnect delays and power-density issues hurt, so that beyond a
certain point a bigger core extracts less ILP than it loses in clock
speed. What does that have to do with this? Well, interconnect delays
relative to transistor speed will increase, AND that reduces the
optimal size of the core heavily. The trends are there; I can give you
figures that Mr. DeMone deduced, which have to be taken with a grain
of salt.

http://www.realworldtech.com/page.cfm?ArticleID=RWT062004172947&p=7

But if this happens as it looks, then at 45nm, which is 2007 on
Intel's roadmap, the optimal core size would be about 20mm²; the rest
is L2 cache, other cores, and whatever else they bring onto the die.
And Intel seems to keep desktop CPU die sizes at about 100-200mm², so
that's 4 cores and their caches.
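The die-budget arithmetic is easy to check; the die and core sizes are the post's figures, while the fraction of the die given to cache and uncore is my assumption:

```python
# Die-budget sketch: die_mm2 and core_mm2 are the figures quoted above;
# cache_share is an assumed split between cores and L2/uncore logic.
die_mm2 = 150        # mid-range of the quoted 100-200 mm^2 desktop die
core_mm2 = 20        # the "optimal" core size claimed for the 45nm node
cache_share = 0.45   # assumption: a bit under half the die for caches etc.

cores = int(die_mm2 * (1 - cache_share) // core_mm2)
print(cores)  # 4, matching the post's estimate of 4 cores plus caches
```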

The reason for multicore is not that multithreading becomes extremely
useful, but that gaining single-threaded performance becomes MUCH
harder.
There was essentially NO progress in the 1970s, and the 'progress'
since then has been AWAY FROM parallelism. With the minor exception
of Fortran 90/95.

There are already companies that use internal parallel languages for
their consumer products to cope with SSE, 3DNow!, and SMP. There ARE
parallel languages that are easy to use for application development.
I'd say that when there are >500 million desktops with multicore CPUs
(n>2), and a highly competitive software market running on them that
needs performance as a distinguishing feature, SOMEONE will see a
business opportunity. What I see is that there are millions of coders
out there looking for solutions, and on the desktop the
synchronization latencies will make their problem much easier than the
supercomputer folks', since multicore systems will have
synchronization latencies way lower than main-memory latency. The
progress in the other direction comes out of opportunity and
necessity, not out of whatever the previous trend was. When there are
two cores in the mainstream and 4 cores on the roadmap, people who
need the power on the DESKTOP will go looking at how to use more
threads. And at some point there will be a parallel language, out of
necessity. Perhaps when there are 16 cores or more in the mainstream.
But for 16 cores to happen, there has to be a situation where going
from 8 to 16 cores gives more performance, in the average case, than a
doubling of the L2 or L3 cache would. Yes, that's the real reason:
there is not much available for improving single-threaded performance
while keeping the x86 ISA, and the scaling trends hurt even more.
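The "semi-independent threads with cheap synchronization" pattern being argued for can be sketched minimally; Python and `concurrent.futures` are my choices for illustration, not anything from the thread:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each thread owns its slice: there are no shared writes, so the
    # only synchronization point is joining the results at the end.
    return sum(x * x for x in chunk)

data = list(range(1000))
slices = [data[i::4] for i in range(4)]  # one semi-independent slice per core

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, slices))

print(total == sum(x * x for x in data))  # True: same answer as the serial loop
```

The point of the sketch is the structure, not the speedup: each thread's work is independent until the final reduction, which is exactly the kind of code desktop programmers reach for first.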
Your first sentence is optimistic, but not impossible. Your second
is what most of the experienced people have been saying. Multiple
cores will be used to run multiple processes (or semi-independent
threads) on desktops, for the foreseeable future.

What I'm saying is that performance-limited applications on HIGH-END
systems will have semi-independent threads for 8 cores as soon as
there is motivation to utilize that. People keep looking at how to
make things semi-independent, but at some point there has to be a
better way to write the parallel code, or there will be nothing the
extra transistors can do for improving performance beyond doubling the
on-die caches. In 6-10 years there will be 16 cores on the desktop.
Unless there is some really disruptive technology that gives us a MUCH
better use for transistors. Like Intel making a 4-core EV8 for the
desktop ;) Or quantum computing getting to the desktop and making
normal semiconductor devices obsolete [very improbable ;]

Jouni Osmala
 
- If you want N digits of accuracy in the numerical calculations, you
just need to use N digits of numerical precision, for O(N^2)
computational effort.

- However, quantizing time produces errors; if you want to reduce
these to N digits of accuracy, you need to use exp(N) time steps.

Is this right? Or is there any way to put a bound on the total error
introduced by time quantization over many time steps?
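For a concrete method the answer is yes: with forward Euler, for example, the per-step (local) error is O(h²) but the accumulated (global) error over a fixed interval is only O(h), so halving the step roughly halves the total error. A toy check (the example ODE and step counts are mine, not from the post):

```python
import math

def euler(f, y0, t_end, n_steps):
    # Forward Euler: local truncation error O(h^2) per step,
    # global error O(h) accumulated over the whole interval.
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += h * f(y)
    return y

exact = math.e  # y' = y, y(0) = 1  =>  y(1) = e
err_coarse = abs(euler(lambda y: y, 1.0, 1.0, 1000) - exact)
err_fine = abs(euler(lambda y: y, 1.0, 1.0, 2000) - exact)
print(round(err_coarse / err_fine, 2))  # ~2: halving h halves the global error
```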

However, in many cases you're not interested in the values of the result
variables as such: you want to categorize the outcome of the experiment
in some way - e.g., the final configuration the protein you are simulating
is in, and an approximate time until a stable configuration is reached.
Whether that result corresponds to exactly those initial conditions you
set up or any of the simulated intermediate conditions is not really relevant,
as long as you can convince yourself that through simulating some set of
initial conditions, you get a statistically accurate view of the outcome in
a qualitative sense. Thus, it would be very valuable to be able to say, for
instance, that the presence of a certain "wrong" configuration of the
Alzheimer protein "catalyses" the folding of newly-made such protein into
the same "wrong" configuration - or to refute this hypothesis.

Jan
 
What kind of transaction - by itself - would take long enough to warrant
any transaction that goes off and does some data mining in the middle?

Ugh - another case of bad or incompetent design, then?

Jan
 
snip
There are already companies that use internal parallel languages for
their consumer products to cope with SSE, 3DNow!, and SMP. There ARE
parallel languages that are easy to use for application development.

Can you give some examples of languages in each of these categories?
And can you speculate about why, if they are easy to use and make
parallel programming much easier, they aren't the "standard" for high
performance computing?
 
Can you give some examples of languages in each of these categories?
And can you speculate about why, if they are easy to use and make
parallel programming much easier, they aren't the "standard" for high
performance computing?

Or even used significantly in that area! Yes, PLEASE tell me about
those languages, as it really is rather relevant to my work.


Regards,
Nick Maclaren.
 
Robert Myers wrote:

(snip)
I chose hot fusion as an example of a problem you'd _really_ like to be
able to solve, that you think that you ought to be able to solve, that
significant effort has gone into solving, but that you just haven't been
able to solve so far...to the point where it has become questionable
whether it is reasonable to expect a satisfactory solution within a
foreseeable future.

Low oil prices over some years have decreased interest.

If oil prices stay near or higher than they are now, that would
be a big incentive to fusion work.

-- glen
 
Low oil prices over some years have decreased interest.

If oil prices stay near or higher than they are now, that would
be a big incentive to fusion work.

Our Lords and Masters in Washington and Whitehall are doing their
level best to arrange that. But Robert Myers is right - there has
been enough work that there are grounds for believing that the
problem is effectively insoluble.


Regards,
Nick Maclaren.
 
Nick Maclaren said:
But Robert Myers is right - there has
been enough work that there are grounds for believing that the
problem is effectively insoluble.

You mean "at the present time", correct? ;-).

Regards,
Dean
 
So, it is your position that it cannot be solved now, nor anytime in the
future?

Reread my posting. I said that there is good evidence that may
well be the case.


Regards,
Nick Maclaren.
 
Dean said:
So, it is your position that it cannot be solved now, nor anytime in the
future?

Hot fusion plainly has its ready defenders. The expectations for
programming multiprocessors are apparently low, with no apparent and
certainly no strenuous dissent.

RM
 
Robert said:
Hot fusion plainly has its ready defenders. The expectations for
programming multiprocessors are apparently low, with no apparent and
certainly no strenuous dissent.

Doesn't seem to stop people from trying. At least the cost of
admission is much lower than tokamak fusion research :-)

My previous message about these guys didn't elicit any response,
which I thought odd. I was sure it would raise at least a few
hackles. I think that I could do something useful with at least
the small version:

http://www.orionmulti.com/products/

My original post made it as far as google, at least:
http://groups.google.com/groups?q=a...TF-8&[email protected]&rnum=1

Cheers,
 
Andrew Reilly wrote:

My previous message about these guys didn't elicit any response, which I
thought odd. I was sure it would raise at least a few hackles. I think
that I could do something useful with at least the small version:

http://www.orionmulti.com/products/

My original post made it as far as google, at least:
http://groups.google.com/groups?q=a...TF-8&[email protected]&rnum=1

What's the figure of merit that makes this product attractive? It's an
x-86 cluster with gigabit ethernet interconnect.

RM
 
Robert said:
Andrew Reilly wrote:




What's the figure of merit that makes this product attractive? It's an
x-86 cluster with gigabit ethernet interconnect.

Flops/dollar, perhaps, but mostly flops/watt. Ultimately
flops/standard-wall-socket. Flops/cubic meter are probably pretty
good too. Oh, and you get to run your x86 cluster Linux code on
it, rather than recoding for the DSP farms that are the other
alternative for that sort of compute/watt or compute/volume. That
includes double precision maths, which most of the DSP farms
aren't good at.

Yeah, I do think that in-the-box gigabit ethernet was a weird
choice, as I said in my previous message. I wonder if you could
usefully use something like the hyperchannel switches that have
been mentioned here recently in a cache-incoherent mode, instead?
That could be even more interesting.

Cheers,
 
I.e., that run fastest on a one-processor Itanium or Opteron or
Xeon workstation...

On the other hand, who isn't drooling over these:

http://www.orionmulti.com/products/

I, for one, am *not* drooling. About the only thing this system has
going for it is a fairly low power consumption for the performance it
gets, but even then we're talking about ~200W vs. ~400W. Once Intel
and AMD get their dual-core chips out, this advantage will
disappear.

12 processors seems fast until you realize that the processors max out
at about 1/3rd of the performance of top-end processors and are often
down closer to 1/6th or worse! Even for their Linpack scores (a
fairly best-case sort of situation for Transmeta chips) you could
match the performance of the 12-processor system with a 4-processor
Opteron or Xeon setup.
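That claim is a one-line check using the post's own (illustrative) figures:

```python
# 12 cluster processors, each at roughly 1/3 the speed of a top-end CPU;
# the numbers are the post's rough estimates, not measurements.
nodes = 12
equivalent_fast_cpus = nodes / 3  # each node ~1/3 of a top-end processor
print(equivalent_fast_cpus)       # 4.0: roughly a 4-way Opteron/Xeon box
```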
Have to wonder why all of those nodes are hooked together (inside
the box, presumably on the motherboard) with gigabit ethernet,
rather than something like the Horus chipset that's been spoken
about here recently, given that the processors have HyperChannel
interfaces. My guess is that it let them offload system software
development onto the open source cluster community, without having
to even do device drivers.

Both software and hardware development are being offloaded here. It's
the cheap solution that will kinda-sorta work OK for the intended
task.
 
|> >
|> >>No. I said "insoluble", not "unsolved".
|> >
|> > So, it is your position that it cannot be solved now, nor anytime in the
|> > future?
|>
|> Hot fusion plainly has its ready defenders. The expectations for
|> programming multiprocessors are apparently low, with no apparent and
|> certainly no strenuous dissent.

Eh? I will dissent, strenuously, against such a sweeping statement!
My comment was about such programming by the mass of 'ordinary'
programmers, not about its use in HPC and embedded work (including
games).

And then there is Jouni Osmala ....


Regards,
Nick Maclaren.
 
Nick said:
No. I said "insoluble", not "unsolved".

You might be right, but that's still in the 'famous last words'
category. :-)

I believe the relevant quote is something like this:

"When an established expert in a field tells you that something is
possible, he is almost certainly right, but when he tells you that
something is impossible, he is very likely wrong."

Terje
 
|> >
|> > No. I said "insoluble", not "unsolved".
|>
|> You might be right, but that's still in the 'famous last words'
|> category. :-)

When the water in my kettle undergoes spontaneous cold fusion,
I will undoubtedly have spoken my last words :-)

But PLEASE remember that I said:

There has been enough work that there are grounds for believing
that the problem is effectively insoluble.

That is a MUCH weaker statement than saying that it is insoluble.

|> I believe the relevant quote is something like this:
|>
|> "When an established expert in a field tells you that something is
|> possible, he is almost certainly right, but when he tells you that
|> something is impossible, he is very likely wrong."

Yes. Apply that recursively :-)

More seriously, (a) I am not an expert (and a neutral lay appraiser
is often more likely to be correct than an expert), (b) far too many
experts in this field have been telling us for 50 years that all
they need for a solution is just a little more time and money, and
(c) there are lashings of counter-examples to Clarke's Law. What
he SHOULD have said is more like:

When an established expert in a field tells you that something is
possible, he is almost certainly right, but when he tells you
that something is impossible, without giving a clear, simple,
draft proof why it is, he is very likely wrong.

There are people who have given such proofs, and have been wrong,
but it is relatively rare.


Regards,
Nick Maclaren.
 