There is effectively NO chance of automatic parallelisation working
on serial von Neumann code of the sort we know and, er, love. Not
in the near future, not in my lifetime and not as far as anyone can
predict. Forget it.
At least as far as your typical spaghetti C++ is concerned, yeah, it's not
going to happen any time soon.
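
To make that concrete, here's a hypothetical little loop (the names smooth,
dst and src are made up) of exactly the kind a compiler has to leave alone:
between the possible aliasing of dst and src and the value carried from one
iteration to the next, an automatic paralleliser can't safely split it up.

    // Hypothetical serial loop a compiler cannot safely parallelise on its own.
    // Without proof that dst and src never alias, and with each iteration
    // depending on the previous one through acc, it must stay serial.
    void smooth(float *dst, const float *src, int n) {
        float acc = 0.0f;                  // loop-carried dependency
        for (int i = 0; i < n; ++i) {
            acc = 0.5f * acc + src[i];     // iteration i needs acc from i-1
            dst[i] = acc;                  // dst might alias src, for all we know
        }
    }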
This has the consequence that large-scale parallelism is not a viable
general-purpose architecture until and unless we move to a programming
paradigm that isn't so intractable to parallelise.
And yet, by that argument there should be no market for the big parallel
servers and supercomputers; but there is. The resolution is that, for things
that really need the speed, people just write the parallel code by hand.
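
In practice "by hand" looks something like the sketch below, assuming plain
C++ threads and made-up names (scale_all and friends): the programmer, not
the compiler, decides how the work gets carved into independent chunks. On
the big machines it's the same idea, just with message-passing processes
instead of threads.

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Sketch of hand-parallelised code: split the data into disjoint ranges
    // and give each worker thread its own range, so there are no data races.
    void scale_all(std::vector<float>& data, float factor, unsigned nthreads) {
        if (nthreads == 0) nthreads = 1;
        std::vector<std::thread> workers;
        std::size_t chunk = (data.size() + nthreads - 1) / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end = std::min(data.size(), begin + chunk);
            workers.emplace_back([&data, factor, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    data[i] *= factor;     // each thread owns its own slice
            });
        }
        for (auto& w : workers) w.join();  // wait for every slice to finish
    }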
If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
out is a chip with 1024 individually slow cores, then those games will
be written to use 1024-way parallelism, just as weather forecasting
and quantum chemistry programs are today. Ditto for Photoshop, 3D
modelling, movie editing, speech recognition and so on. There's certainly
no shortage of parallelism in the problem domains. The reason games
don't use parallel code today while weather forecasting does isn't any
software issue; it's that gamers don't have the money to buy massively
parallel supercomputers, whereas organizations doing weather forecasting
do. When that changes, so will the software.
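
As a rough sketch of where that parallelism sits in something game-shaped
(the Entity type and update function here are invented for illustration):
every entity in a typical update step can be advanced independently, so the
runtime is free to spread the work over however many cores happen to exist.

    #include <algorithm>
    #include <execution>
    #include <vector>

    struct Entity { float x, y, vx, vy; };

    // Per-entity work of the kind a game update loop is full of. No entity
    // touches any other, so the parallel execution policy can fan the loop
    // out across all available cores without races.
    void update(std::vector<Entity>& world, float dt) {
        std::for_each(std::execution::par, world.begin(), world.end(),
                      [dt](Entity& e) {
                          e.x += e.vx * dt;
                          e.y += e.vy * dt;
                      });
    }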