Conversation
nikitamikhaylov
left a comment
Ok, but it is hard to read in the presence of many `queue.pop()` calls. Since we have many kinds of queues, like ThreadsQueue and AsyncQueue, why not introduce another one called Queue? It would be not just a synonym for std::queue, but would have only three methods: get, push and empty.
    data.task = nullptr;
    lock.unlock();
You can call notify even with a locked mutex.
    if (!expected->executeTask(data, true))
        return false;

    expected = nullptr;
Also, why do we need a CAS loop here if we have a mutex?
We don't use the mutex to check whether there is a stopping pipeline task. We do that in enterConcurrentReadSection or exitConcurrentReadSection, and it could be slow to use a mutex here.
The mutex is needed to select a thread which executes the task; the CAS loop is needed to select a task to execute.
    while (auto task = async_task_queue.wait(lock))
    {
        auto * node = static_cast<ExecutingGraph::Node *>(task.data);
        executor_contexts[task.thread_num]->pushAsyncTask(node);
Why do we schedule the async task to the same thread (from which prepare was called)? Maybe we can take a random thread, or the thread with the least number of tasks?

I had an idea that it is safer to continue the coroutine from the same thread. Probably it is not necessary; just in case.
    else
        thread_to_wake = threads_queue.popAny();

    lock.unlock();
The idea is: if there is a task to execute, we need to wake up a thread.
The rest is a heuristic: it is better to wake up a thread which has some tasks (but only if it is not sleeping).
Probably we can combine those queues somehow and check both conditions ...
    {
        ExecutingGraph::Node * task = nullptr;

        if (!async_tasks.empty())
    -    if (!async_tasks.empty())
    +    if (has_async_tasks = !async_tasks.empty(); has_async_tasks)
And we can delete the second if statement.
But we should check this after the pop ...
    std::atomic_bool has_async_tasks = false;

    /// These objects are used to wait for the next available task.
    std::condition_variable condvar;
This is the worst name for condition_variable :(
To me it's ok as long as there is only one condvar field ...
But I'll try to improve it.
Performance - 61 slower. +8.324x. OMG. Looks like the lock from boost is too slow :(
Now performance looks alright and the tests have passed. Is it still a draft?
@Mergifyio update |
✅ Branch has been successfully updated |
I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en
Changelog category (leave one):