On a multiprocessor, can we not assume instruction order even within a single thread? Must we write synchronization instructions?


zjs

On a multiprocessor, can we not assume instruction order even within a single thread?
Must we write synchronization instructions in every function?
for example:

#include <iostream>
using namespace std;

int i, j;

int main()
{
    i = 1;
    j = 2;
    if (i < j)
        cout << "It's ok" << endl;
    else
        cout << "Unbelievable!!!!" << endl; // is this possible on a multiprocessor?
                                           // Is it really true? Just because of memory caching?
}
Terrifying!!
Can anyone tell me the truth?

The whole MSDN article follows.

Platform SDK: DLLs, Processes, and Threads
Synchronization and Multiprocessor Issues

Applications may encounter problems when run on multiprocessor systems due
to assumptions they make which are valid only on single-processor systems.

Thread Priorities

Consider a program with two threads, one with a higher priority than the
other. On a single-processor system, the higher priority thread will not
relinquish control to the lower priority thread because the scheduler gives
preference to higher priority threads. On a multiprocessor system, both
threads can run simultaneously, each on its own processor.

Applications should synchronize access to data structures to avoid race
conditions. Code that assumes that higher priority threads run without
interference from lower priority threads will fail on multiprocessor
systems.


Memory Caching

When a processor writes to a memory location, the value is cached to improve
performance. Similarly, the processor attempts to satisfy read requests from
the cache to improve performance. Furthermore, processors begin to fetch
values from memory before they are requested by the application. This can
happen as part of speculative execution or due to cache line issues.

As a result, multiple processors can have different views of the system
memory state because their caches are out of synch. For example, the
following code is not safe on a multiprocessor system:


int iValue;
BOOL fValueHasBeenComputed = FALSE;
extern int ComputeValue();

void CacheComputedValue()
{
    if (!fValueHasBeenComputed)
    {
        iValue = ComputeValue();
        fValueHasBeenComputed = TRUE;
    }
}

BOOL FetchComputedValue(int *piResult)
{
    if (fValueHasBeenComputed)
    {
        *piResult = iValue;
        return TRUE;
    }
    else
        return FALSE;
}

There is a race condition in this code on multiprocessor systems because the
processor that executes CacheComputedValue the first time may write
fValueHasBeenComputed to main memory before writing iValue to main memory.
Consequently, a second processor executing FetchComputedValue at the same
time reads fValueHasBeenComputed as TRUE, but the new value of iValue is
still in the first processor's cache and has not been written to memory.

Special instructions can force a processor's memory cache to agree with main
memory. Such instructions ensure that previous read and write requests have
completed and are made visible to other processors, and that no subsequent
read or write requests have started. Examples are:


Functions that enter or leave critical sections.
Functions that signal synchronization objects.
Wait functions.
Interlocked functions.
Consequently, the multiprocessor race condition above can be repaired as
follows:


BOOL volatile fValueHasBeenComputed = FALSE;

void CacheComputedValue()
{
    if (!fValueHasBeenComputed)
    {
        iValue = ComputeValue();
        InterlockedExchange((LONG*)&fValueHasBeenComputed, TRUE);
    }
}

The InterlockedExchange function ensures that the value of iValue is updated
for all processors before the value of fValueHasBeenComputed is set to TRUE.
 
On a multiprocessor, can we not assume instruction order even within a single thread?
Must we write synchronization instructions in every function?
For example:

#include <iostream>
using namespace std;

int i, j;

int main()
{
    i = 1;
    j = 2;
    if (i < j)
        cout << "It's ok" << endl;
    else
        cout << "Unbelievable!!!!" << endl; // is this possible on a multiprocessor?
                                           // Is it really true? Just because of memory caching?
}
Terrifying!!
Can anyone tell me the truth?

You haven't started any threads, so there is no chance i and j will be
modified by another thread, and the code will work as you expect. If it
didn't, every program ever written would be broken.
The whole MSDN article follows.

<snip>

That was rather long. If you have a specific question about some part of
it, fire away.
 
"If it didn't, every program ever written would be broken."
I am really afraid of that!!!!!!


But MSDN says the following code is not safe on a multiprocessor
system:
(nothing about threads, just about memory caching)

int iValue;
BOOL fValueHasBeenComputed = FALSE;
extern int ComputeValue();

void CacheComputedValue()
{
    if (!fValueHasBeenComputed)
    {
        iValue = ComputeValue();
        fValueHasBeenComputed = TRUE; // this may be done before iValue = ComputeValue()
                                      // because of memory caching
    }
}

BOOL FetchComputedValue(int *piResult)
{
    if (fValueHasBeenComputed)
    {
        *piResult = iValue;
        return TRUE;
    }
    else
        return FALSE;
}

It can be repaired as follows:
BOOL volatile fValueHasBeenComputed = FALSE;

void CacheComputedValue()
{
    if (!fValueHasBeenComputed)
    {
        iValue = ComputeValue();
        InterlockedExchange((LONG*)&fValueHasBeenComputed, TRUE); // important
    }
}
 
"If it didn't, every program ever written would be broken."
I am really afraid of that!!!!!!


But the MSDN said the following code is not safe on a multiprocessor
system:
(nothing about thread ,just about memory caching)

int iValue;
BOOL fValueHasBeenComputed = FALSE;
extern int ComputeValue();

void CacheComputedValue()
{
if (!fValueHasBeenComputed)
{
iValue = ComputeValue();
fValueHasBeenComputed = TRUE;// this maybe done before iValue =
ComputeValue(); because of memory caching.
}
}

BOOL FetchComputedValue(int *piResult)
{
if (fValueHasBeenComputed)
{
*piResult = iValue;
return TRUE;
}
else
return FALSE;
}

It can be repaired as follows:
BOOL volatile fValueHasBeenComputed = FALSE;

void CacheComputedValue()
{
if (!fValueHasBeenComputed)
{
iValue = ComputeValue();
InterlockedExchange((LONG*)&fValueHasBeenComputed, TRUE);// important
}
}

You have to read the part in between. :) It says, in part:

<q>
There is a race condition in this code on multiprocessor systems because the
processor that executes CacheComputedValue the first time may write
fValueHasBeenComputed to main memory before writing iValue to main memory.
Consequently, a second processor executing FetchComputedValue at the same
time reads fValueHasBeenComputed as TRUE, but the new value of iValue is
still in the first processor's cache and has not been written to memory.
</q>

This is talking about two threads executing on different processors in a
multiprocessor system. Note also the "repair" is not complete. For the
InterlockedXXX functions to work their memory barrier magic, everyone has
to play along, including FetchComputedValue. Every read and every write has
to use them, and then declaring fValueHasBeenComputed as volatile is
unnecessary.[*] Of course, there is still a race, as multiple threads can
find themselves inside the "if" statement in CacheComputedValue. It's a lot
easier to use a mutex than try to get this lower-level stuff right.

[*] As of VC 2005, "volatile" is supposed to confer memory barrier
semantics on every read and write. In addition, ISTR it orders operations
WRT non-volatile variables. This is all very non-standard and unusual, but
if true, it would make the use of InterlockedXXX unnecessary in this
example.
 