mike
Regarding the trade-off between random & sequential data access for
SDR, DDR & DDR2, I would like to know whether the information I found
at the link below is correct (I've summarized it into the table that
follows):
For a base clock rate of 200MHz (5ns per cycle):
        Initial Random   Subsequent Sequential
        Data Access      Data Access
        --------------   ---------------------
SDR     5 ns             5 ns
DDR     10 ns            2.5 ns
DDR2    20 ns            1.25 ns
The reason I'm asking is that I believe this shows that, for an
application with a very high proportion of random memory accesses, the
best performance will be achieved with DDR-400 memory rather than
DDR2-800. True? By this logic SDR-200 would be even better, although I
don't think there ever was such a thing. For the purpose of this
comparison, assume processor speed & cache sizes are the same.
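To make my question concrete, here's a quick back-of-the-envelope
sketch (Python, purely illustrative) that takes the table's per-access
costs at face value and computes the average time per datum for a
workload where some fraction of accesses are random. It ignores CAS
latency, bank interleaving, burst length, prefetch buffers and
everything else a real memory controller does, so it only illustrates
the model above, not real hardware:

    # Toy model: a "random" access pays the initial cost, a "sequential"
    # access pays the subsequent cost; nothing else is modelled.
    costs_ns = {                # (initial random, subsequent sequential)
        "SDR-200":  (5.0,  5.0),
        "DDR-400":  (10.0, 2.5),
        "DDR2-800": (20.0, 1.25),
    }

    def avg_ns(tech, random_fraction):
        initial, sequential = costs_ns[tech]
        return random_fraction * initial + (1.0 - random_fraction) * sequential

    for p in (0.0, 0.1, 0.2, 0.5, 1.0):
        row = "  ".join("%s %5.2f ns" % (t, avg_ns(t, p)) for t in costs_ns)
        print("random fraction %.1f:  %s" % (p, row))

If the table were literally true, DDR2-800 would only win while less
than about one access in nine is random, and SDR-200 would overtake
DDR-400 once more than one access in three is random, which is exactly
the sort of result that makes me want a sanity check.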
Source:
http://archives.postgresql.org/pgsql-performance/2006-04/msg00601.php
"Note also what happens when transferring the first datum after a lull
period. For purposes of example, let's pretend that we are talking about a
base clock rate of 200MHz = 5ns.
The SDR still transfers data every 5ns no matter what. The DDR transfers
the 1st datum in 10ns and then assuming there are at least 2 sequential
datums to be transferred will transfer the 2nd and subsequent sequential
pieces of data every 2.5ns. The DDR2 transfers the 1st datum in 20ns and
then assuming there are at least 4 sequential datums to be transferred
will transfer the 2nd and subsequent sequential pieces of data every
1.25ns.
Thus we can see that randomly accessing RAM degrades performance
significantly for DDR and DDR2. We can also see that the conditions for
optimal RAM performance become more restrictive as we go from SDR to DDR to
DDR2. The reason DDR2 with a low base clock rate excelled at tasks like
streaming multimedia and stank at things like small transaction OLTP DB
applications is now apparent."
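For what it's worth, the quoted arithmetic is easy to reproduce. The
sketch below (same toy model, Python) just charges the quoted
first-datum penalty once per run and the quoted per-datum rate for the
rest, so take it as an illustration of the quoted argument rather than
a statement about real DRAM timing:

    # Time to move a run of n sequential datums after a lull, using the
    # quoted figures for a 200MHz base clock: first datum 5/10/20 ns,
    # later datums 5/2.5/1.25 ns for SDR/DDR/DDR2 respectively.
    def burst_ns(first_ns, per_datum_ns, n):
        return first_ns + (n - 1) * per_datum_ns

    for n in (1, 2, 4, 8, 16, 64):
        sdr  = burst_ns(5.0,  5.0,  n)
        ddr  = burst_ns(10.0, 2.5,  n)
        ddr2 = burst_ns(20.0, 1.25, n)
        print("run of %3d datums:  SDR %6.1f ns  DDR %6.1f ns  DDR2 %6.1f ns"
              % (n, sdr, ddr, ddr2))

With those inputs DDR2 doesn't overtake DDR until a run reaches about
nine or ten datums, which is what the author means by the conditions
for optimal performance becoming more restrictive from SDR to DDR to
DDR2. Whether real hardware actually behaves this way is exactly what
I'm asking.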