I understand that std::atomic<> is an atomic object. But to what extent is it atomic? To my understanding an operation can be atomic; what exactly is meant by making an object atomic? For example, if two threads concurrently execute the following code:
a = a + 12;
Then is the entire operation (say, add_twelve_to(int)) atomic? Or is only the change made to the variable (say, operator=()) atomic?
Best answer
Each instantiation and full specialization of std::atomic<> represents a type that different threads can simultaneously operate on (their instances) without raising undefined behavior.
Objects of atomic types are the only C++ objects that are free of data races; that is, if one thread writes to an atomic object while another thread reads from it, the behavior is well-defined.
In addition, accesses to atomic objects may establish inter-thread synchronization and order non-atomic memory accesses as specified by std::memory_order.
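As a minimal sketch of the data-race-freedom guarantee (function name and counts are illustrative, not from the original answer): several threads hammer the same std::atomic<int> concurrently, and because every access is atomic, no increment is lost and the behavior is well-defined.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Four threads each increment the same std::atomic<int> 100000 times.
// Concurrent access to the atomic object is well-defined, so the final
// value is exactly 4 * 100000.
int count_with_atomic()
{
    std::atomic<int> counter{0};
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back([&counter] {
            for (int i = 0; i < 100000; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& th : threads)
        th.join();
    return counter.load(); // guaranteed to be 400000
}
```

With a plain (non-atomic) int, the same program would be a data race and therefore undefined behavior.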
std::atomic<> wraps operations that, before C++11, had to be performed using (for example) interlocked functions with MSVC or atomic builtins in the case of GCC.
Also, std::atomic<> gives you more control by allowing various memory orders that specify synchronization and ordering constraints. If you want to read more about C++11 atomics and the memory model, the following links may be useful:
- C++ atomics and memory ordering
- Comparison: Lockless programming with atomics in C++ 11 vs. mutex and RW-locks
- C++11 introduced a standardized memory model. What does it mean? And how is it going to affect C++ programming?
- Concurrency in C++11
Note that, for typical use cases, you would probably use the overloaded arithmetic operators or the equivalent named member functions:
std::atomic<long> value(0);
value++; //This is an atomic op
value += 5; //And so is this
Because the operator syntax does not allow you to specify the memory order, these operations are performed with std::memory_order_seq_cst, as this is the default order for all atomic operations in C++11. It guarantees sequential consistency (a total global ordering) between all atomic operations.
In some cases, however, this may not be required (and nothing comes for free), so you may want to use a more explicit form:
std::atomic<long> value {0};
value.fetch_add(1, std::memory_order_relaxed); // Atomic, but there are no synchronization or ordering constraints
value.fetch_add(5, std::memory_order_release); // Atomic, performs 'release' operation
Now, your example:
a = a + 12;
will not evaluate to a single atomic operation: it results in a.load() (which is itself atomic), then an addition of 12 to the loaded value, and finally an a.store() (also atomic) of the result. As I noted earlier, std::memory_order_seq_cst will be used here.
However, if you write a += 12, it will be a single atomic operation (as noted before), roughly equivalent to a.fetch_add(12, std::memory_order_seq_cst).
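To make the difference observable, here is a hedged sketch (function names and iteration counts are mine, not from the answer): two threads repeatedly add 12 to a shared counter, once via the non-atomic read-modify-write a = a + 12, and once via the atomic a += 12. The atomic form always produces the exact total; the load/store form can lose updates that land between the load and the store.

```cpp
#include <atomic>
#include <thread>

// Non-atomic read-modify-write: a = a + 12 decomposes into a.load(),
// an ordinary addition, and a.store(). An update from the other thread
// that occurs between the load and the store is silently overwritten.
int sum_with_load_store()
{
    std::atomic<int> a{0};
    auto worker = [&a] {
        for (int i = 0; i < 100000; ++i)
            a = a + 12;                // load, add, store: three steps
    };
    std::thread t1(worker), t2(worker);
    t1.join(); t2.join();
    return a.load();                   // often less than 2400000
}

// Atomic read-modify-write: a += 12 is one indivisible fetch_add.
int sum_with_fetch_add()
{
    std::atomic<int> a{0};
    auto worker = [&a] {
        for (int i = 0; i < 100000; ++i)
            a += 12;                   // single atomic operation
    };
    std::thread t1(worker), t2(worker);
    t1.join(); t2.join();
    return a.load();                   // always exactly 2400000
}
```

Whether sum_with_load_store() actually loses updates on a given run is timing-dependent, but only sum_with_fetch_add() is guaranteed to reach the full total.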
As for your comment:
A regular int has atomic loads and stores. What's the point of wrapping it with atomic<>?
Your statement is only true for architectures that provide such a guarantee of atomicity for stores and/or loads. There are architectures that do not. Also, it is usually required that operations be performed on a word-/dword-aligned address to be atomic. std::atomic<> is guaranteed to be atomic on every platform, without additional requirements. Moreover, it allows you to write code like this:
void* sharedData = nullptr;
std::atomic<int> ready_flag{0};
// Thread 1
void produce()
{
sharedData = generateData();
ready_flag.store(1, std::memory_order_release);
}
// Thread 2
void consume()
{
while (ready_flag.load(std::memory_order_acquire) == 0)
{
std::this_thread::yield();
}
assert(sharedData != nullptr); // will never trigger
processData(sharedData);
}
Note that the assertion condition will always be true (and thus will never trigger), so you can always be sure that the data is ready after the while loop exits. That is because:
store() to the flag is performed after sharedData is set (we assume that generateData() always returns something useful, in particular, never returns NULL) and uses std::memory_order_release order:
memory_order_release
A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store. All writes in the current thread are visible in other threads that acquire the same atomic variable
sharedData is used after the while loop exits, and thus after load() from the flag has returned a non-zero value. load() uses std::memory_order_acquire order:
std::memory_order_acquire
A load operation with this memory order performs the acquire operation on the affected memory location: no reads or writes in the current thread can be reordered before this load. All writes in other threads that release the same atomic variable are visible in the current thread.
This gives you precise control over synchronization and allows you to explicitly specify how your code may/may not/will/will not behave. This would not be possible if the only guarantee were atomicity itself. It matters especially for the more interesting synchronization models, such as release-consume ordering.