> [...] you will never get as good performance with a CPU with 2 cores as you
> would with 2 single-core CPUs, due to many physical constraints on the
> throughput and the underlying architecture of multi-core chips. But this is
> a sweeping generalisation.
Hi,
No offense intended, and the following is based on my limited knowledge.

That may not always be the case, and may not be at this time. Two physical
units are generally less desirable (for performance and other factors) than
one physical unit, for many reasons - I'll skip them for simplicity. I
suggest the OP go to Intel, AMD, or Microsoft to see whether there are any
benchmark comparisons showing the current performance of CPUs against
various applications, including OSes.
> But at a basic/simplistic level, 2 real CPUs will perform better than 2
> CPUs on the same die sharing components and having to "cooperate" to a
> certain degree.
In terms of cooperation, scheduling (determining which instructions are to
be processed by which units) and synchronization (handling jobs that have
been processed and returned for use) are always needed, and speed and
efficiency depend largely on the quality of the algorithms used - and,
again, on other physical factors, such as distance (shorter is better, so
two physical units are farther apart than two cores on one die) and so on.
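To make the scheduling/synchronization idea concrete, here is a toy sketch of my own (not how any real OS or CPU actually does it): jobs are dispatched to two "processing units" (worker threads) through a shared queue, and the results are synchronized back to the dispatcher once every job is done.

```python
import queue
import threading

def worker(jobs: queue.Queue, results: queue.Queue) -> None:
    """A 'processing unit': pull jobs from the shared queue until told to stop."""
    while True:
        job = jobs.get()
        if job is None:                 # sentinel: no more jobs for this unit
            jobs.task_done()
            return
        results.put((job, job * job))   # "process" the job (here: square it)
        jobs.task_done()

jobs: queue.Queue = queue.Queue()
results: queue.Queue = queue.Queue()

# Two processing units cooperating on one job queue.
units = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(2)]
for u in units:
    u.start()

for job in range(6):                    # scheduling: dispatch six jobs
    jobs.put(job)
for _ in units:                         # one stop sentinel per unit
    jobs.put(None)

jobs.join()                             # synchronization: wait for all jobs
for u in units:
    u.join()

collected = sorted(results.queue)       # results gathered from both units
print(collected)                        # → [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
```

Note that the results can come back in any order depending on which unit finishes first - that unpredictability is exactly why the synchronization step is needed at all.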
By no means am I a CPU expert, but the basic CPU design theory is actually
easy to understand: it is based on "production" theory, where each
processing unit is treated literally like a unit on a production line, and
the jobs are instructions delivered by applications (driven by users and/or
by the applications themselves). So, fundamentally, it uses the same
job-scheduling algorithms as would be used for production. The
job-scheduling part alone is complicated enough, and the most interesting
thing is that there is no "absolute" answer for which algorithm is "right" -
it is evolving every day. More than 10 years ago, I studied nearly 500
scheduling algorithms in search of an ultimate result. Even then, that was
not a "thorough" study, and it probably was not an "ultimate" result either.
I mention this only to show how complicated the subject is, not to imply
that I know a lot.
So one can skip all the "high tech" parts of the CPU and relate their
knowledge of manufacturing production and/or other job scheduling to how a
CPU might do the job - that will be enough to capture the concepts, though
of course the details are much more complicated.
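As a tiny illustration of the "no absolute answer" point, here is a toy comparison of my own between two classic policies on the same set of jobs: first-come-first-served (FCFS) and shortest-job-first (SJF). SJF wins on average waiting time, but it makes the longest job wait the longest - exactly the kind of trade-off that keeps this field evolving.

```python
def average_wait(burst_times):
    """Average waiting time when jobs run back-to-back in the given order.

    Each job waits for the total run time of everything scheduled before it.
    """
    wait, elapsed = 0, 0
    for burst in burst_times:
        wait += elapsed
        elapsed += burst
    return wait / len(burst_times)

jobs = [8, 1, 2, 4]                  # job lengths, in arrival order

fcfs = average_wait(jobs)            # FCFS: run in arrival order
sjf = average_wait(sorted(jobs))     # SJF: run shortest job first

print(fcfs, sjf)                     # → 7.0 2.75
```

SJF cuts the average wait from 7.0 to 2.75 here, yet the 8-unit job that used to start immediately now waits 7 units - so "better" depends entirely on what you are optimizing for.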
In terms of Vista's performance on two physical units versus multiple
cores, I'd be interested to find out if I have time to test it, or if
anyone cares to share their experience. From what I have learned from some
Intel papers, their multi-core design was done without consideration of
when, or whether, Vista would be released. That gives me some doubts about
how Vista will perform - but maybe they synchronized the upper levels of
the architecture long ago, so the rest is at the functionality level.
Back to the OP's question: in theory, it is as you think - the OS wouldn't
care, and once it has delegated a job, it is up to the CPU to handle it.
Now the question is how the OS knows - and whether it actually knows - how
many processing units there are, AND whether its scheduling algorithms are
good enough for dispatching the jobs (so units won't sit idle) and for
handling results returned for further processing by the OS (and
applications). The first part of the question is why I asked earlier
whether it's a design or a licensing issue - and I appreciate those who
answered - so we know the basic version might not even be aware that there
is an additional unit, or won't use it if it is there (if I understood the
answer correctly), which means job dispatching won't make any difference
for those versions. The second part of the question is for those who design
Windows and have some benchmark results to tell.
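On the "how does it know how many processing units there are" part: at the user level one can at least ask the OS what it reports, and dispatch jobs across that many workers. A rough sketch of my own in Python (whether a given Windows edition reports, or actually uses, every unit is exactly the design-or-license question above):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Ask the OS how many logical processing units it is willing to report.
# (cpu_count() can return None, hence the fallback to 1.)
units = os.cpu_count() or 1

# Dispatch eight small jobs across that many workers; map() also handles
# the synchronization, returning results in job order regardless of which
# worker finished first.
with ThreadPoolExecutor(max_workers=units) as pool:
    results = list(pool.map(lambda job: job * 2, range(8)))

print(units, results)                # → e.g. 4 [0, 2, 4, 6, 8, 10, 12, 14]
```

If the OS only reports one unit (by design or by license), this same code silently degrades to a single worker - the application never has to know the difference, which is the point.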
So, to boil down all of the above: the bottlenecks will be the scheduling
and synchronization abilities of the application (OS included) plus those
of the CPU - that is one of the reasons multiple CPUs (multiple cores or
multiple physical ones) were rarely used at the PC level (other than in PC
servers). But that was before, and things might be different now and in the
future.
Again, the above is based on my very rough knowledge; additional
corrections and comments are welcome.