zjs
On a multiprocessor system, can we not rely on instruction ordering even within a single thread?
Must we add synchronization instructions to every function?
for example:

#include <iostream>
using namespace std;

int i, j;

int main()
{
    i = 1;
    j = 2;
    if (i < j)
        cout << "It's ok" << endl;
    else
        // Is this branch really possible on a multiprocessor,
        // just because of memory caching?
        cout << "Unbelievable!!" << endl;
    return 0;
}
How troubling! Can anyone tell me the truth?
The full MSDN article follows.
Platform SDK: DLLs, Processes, and Threads
Synchronization and Multiprocessor Issues
Applications may encounter problems when run on multiprocessor systems due
to assumptions they make which are valid only on single-processor systems.
Thread Priorities
Consider a program with two threads, one with a higher priority than the
other. On a single-processor system, the higher priority thread will not
relinquish control to the lower priority thread because the scheduler gives
preference to higher priority threads. On a multiprocessor system, both
threads can run simultaneously, each on its own processor.
Applications should synchronize access to data structures to avoid race
conditions. Code that assumes that higher priority threads run without
interference from lower priority threads will fail on multiprocessor
systems.
Memory Caching
When a processor writes to a memory location, the value is cached to improve
performance. Similarly, the processor attempts to satisfy read requests from
the cache to improve performance. Furthermore, processors begin to fetch
values from memory before they are requested by the application. This can
happen as part of speculative execution or due to cache line issues.
As a result, multiple processors can have different views of the system
memory state because their caches are out of synch. For example, the
following code is not safe on a multiprocessor system:
int iValue;
BOOL fValueHasBeenComputed = FALSE;
extern int ComputeValue();

void CacheComputedValue()
{
    if (!fValueHasBeenComputed)
    {
        iValue = ComputeValue();
        fValueHasBeenComputed = TRUE;
    }
}
BOOL FetchComputedValue(int *piResult)
{
    if (fValueHasBeenComputed)
    {
        *piResult = iValue;
        return TRUE;
    }
    else
        return FALSE;
}
There is a race condition in this code on multiprocessor systems because the
processor that executes CacheComputedValue the first time may write
fValueHasBeenComputed to main memory before writing iValue to main memory.
Consequently, a second processor executing FetchComputedValue at the same
time reads fValueHasBeenComputed as TRUE, but the new value of iValue is
still in the first processor's cache and has not been written to memory.
Processors can be instructed to force their memory caches to agree with main
memory by using special instructions. Such instructions ensure that previous
read and write requests have completed and are made visible to other
processors, and that no subsequent read or write requests have started.
Examples are:

  - Functions that enter or leave critical sections.
  - Functions that signal synchronization objects.
  - Wait functions.
  - Interlocked functions.
Consequently, the multiprocessor race condition above can be repaired as
follows:
BOOL volatile fValueHasBeenComputed = FALSE;

void CacheComputedValue()
{
    if (!fValueHasBeenComputed)
    {
        iValue = ComputeValue();
        InterlockedExchange((LONG*)&fValueHasBeenComputed, TRUE);
    }
}
The InterlockedExchange function ensures that the value of iValue is updated
for all processors before the value of fValueHasBeenComputed is set to TRUE.