Making CPUs run faster has become rather difficult and very
expensive. One could simply buy more computers, or put more
full-fledged CPUs on the motherboard, but that is costly,
somewhat inefficient, and impractical for applications that need
very fast inter-process communication. A cheaper way to get more
speed out of a single chip is to design it as two, four, eight,
or more CPUs (cores), all on the same piece of silicon, and let
them share the workload.
This does not work for everything. Some things have to be done
in sequence, and splitting those tasks into multiple processes
to run in parallel may not be possible or practical. Still, a
lot of work can be split up and farmed out to different cores on
the same piece of silicon. Today's dual-core CPUs are very good
at such things, but at most you get twice the speed overall,
minus a little overhead to coordinate use of the cores.
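To put a rough number on that (this is just Amdahl's law, stated
informally, with a made-up 90% figure for illustration): if a
fraction p of a job can run in parallel and the rest must run in
sequence, the best speedup n cores can give is

    speedup = 1 / ((1 - p) + p/n)

With two cores and a job that is 90% parallelizable (p = 0.9),
that works out to 1 / (0.1 + 0.45), or about 1.8x rather than a
full 2x, and coordination overhead eats into that a bit more.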
As the core count goes higher, the work of parceling out and
coordinating tasks grows more complex, and the speed gained per
additional core diminishes.
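Running the same made-up p = 0.9 example out to more cores shows
the effect: 4 cores give about 3.1x, 8 cores about 4.7x, and even
with infinitely many cores the speedup can never exceed
1 / (1 - p) = 10x, because the sequential 10% still has to run
one step at a time.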
One can get around the problem of diminishing returns to some
extent with software designed from the start to make parceling
out tasks and running them in parallel easy, but a great deal of
software was written before programmers had to worry about such
things. So improvements are being made both in how tasks are
managed across multiple cores and in designing workflows so they
can be easily parceled out and run on multiple cores
simultaneously.
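Just as a sketch of what "designed to be parceled out" can look
like (this is illustrative Go I put together for this post, not
code from any particular package), here is a program that splits
a big summation into one chunk per core, sums the chunks in
parallel, and combines the results:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

// parallelSum splits nums into one chunk per CPU core, sums each
// chunk in its own goroutine, and combines the partial results.
func parallelSum(nums []int) int {
    if len(nums) == 0 {
        return 0
    }
    workers := runtime.NumCPU()
    if workers > len(nums) {
        workers = len(nums)
    }
    chunk := (len(nums) + workers - 1) / workers // round up
    partial := make([]int, workers)

    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        lo := w * chunk
        if lo >= len(nums) {
            break // fewer chunks than workers; nothing left to hand out
        }
        hi := lo + chunk
        if hi > len(nums) {
            hi = len(nums)
        }
        wg.Add(1)
        go func(w, lo, hi int) {
            defer wg.Done()
            sum := 0
            for _, v := range nums[lo:hi] {
                sum += v
            }
            partial[w] = sum
        }(w, lo, hi)
    }
    wg.Wait() // the coordination overhead lives here

    total := 0
    for _, p := range partial {
        total += p
    }
    return total
}

func main() {
    nums := make([]int, 1000000)
    for i := range nums {
        nums[i] = i
    }
    fmt.Println(parallelSum(nums)) // 499999500000
}

On one core this is just a sum with extra bookkeeping; the win
only shows up when the work per chunk outweighs the cost of
starting and joining the goroutines, which is exactly the
overhead tradeoff described above.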
Net result: Computers keep getting faster and the cost of
computing keeps going down, faster than would be the case if
single-core CPUs were all that was available.
Cheers!
jim b.